Security and the Internet of Things

Last year, on October 21, your digital video recorder -- or at least a DVR like yours -- knocked Twitter off the internet. Someone used your DVR, along with millions of insecure webcams, routers, and other connected devices, to launch an attack that started a chain reaction, resulting in Twitter, Reddit, Netflix, and many other sites going off the internet. You probably didn’t realize that your DVR had that kind of power. But it does.

All computers are hackable. This has as much to do with the computer market as it does with the technologies. We prefer our software full of features and inexpensive, at the expense of security and reliability. That your computer can affect the security of Twitter is a market failure. The industry is filled with market failures that, until now, have been largely ignorable. As computers continue to permeate our homes, cars, and businesses, these market failures will no longer be tolerable. Our only solution will be regulation, and that regulation will be foisted on us by a government desperate to “do something” in the face of disaster.

In this article I want to outline the problems, both technical and political, and point to some regulatory solutions. Regulation might be a dirty word in today’s political climate, but security is the exception to our small-government bias. And as the threats posed by computers become greater and more catastrophic, regulation will be inevitable. So now’s the time to start thinking about it.

We also need to reverse the trend of connecting everything to the internet. Where we risk harm and even death, we need to think twice about what we connect and what we deliberately leave uncomputerized.

If we get this wrong, the computer industry will look like the pharmaceutical industry or the aircraft industry. But if we get this right, we can maintain the innovative environment of the internet that has given us so much.

**********

We no longer have things with computers embedded in them. We have computers with things attached to them.

Your modern refrigerator is a computer that keeps things cold. Your oven, similarly, is a computer that makes things hot. An ATM is a computer with money inside. Your car is no longer a mechanical device with some computers inside; it’s a computer with four wheels and an engine. Actually, it’s a distributed system of over 100 computers with four wheels and an engine. And, of course, your phones became full-power general-purpose computers in 2007, when the iPhone was introduced.

We wear computers: fitness trackers and computer-enabled medical devices -- and, of course, we carry our smartphones everywhere. Our homes have smart thermostats, smart appliances, smart door locks, even smart light bulbs. At work, many of those same smart devices are networked together with CCTV cameras, sensors that detect customer movements, and everything else. Cities are starting to embed smart sensors in roads, streetlights, and sidewalk squares, along with smart energy grids and smart transportation networks. A nuclear power plant is really just a computer that produces electricity, and -- like everything else we’ve just listed -- it’s on the internet.

The internet is no longer a web that we connect to. Instead, it’s a computerized, networked, and interconnected world that we live in. This is the future, and what we’re calling the Internet of Things.

Broadly speaking, the Internet of Things has three parts. There are the sensors that collect data about us and our environment: smart thermostats, street and highway sensors, and those ubiquitous smartphones with their motion sensors and GPS location receivers. Then there are the “smarts” that figure out what the data means and what to do about it. This includes all the computer processors on these devices and -- increasingly -- in the cloud, as well as the memory that stores all of this information. And finally, there are the actuators that affect our environment. The point of a smart thermostat isn’t to record the temperature; it’s to control the furnace and the air conditioner. Driverless cars collect data about the road and the environment to steer themselves safely to their destinations.

You can think of the sensors as the eyes and ears of the internet. You can think of the actuators as the hands and feet of the internet. And you can think of the stuff in the middle as the brain. We are building an internet that senses, thinks, and acts.
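For readers who think in code, the sense-think-act loop described above can be sketched in a few lines. This is a toy illustration, not any real device’s firmware; the temperatures, setpoint, and deadband are invented for the example.

```python
# A toy sense-think-act loop: the basic shape of every IoT device.
# All numbers here are invented for illustration.

def decide(temperature, setpoint, deadband=1.0):
    """The 'brain': turn a sensor reading into an actuator command."""
    if temperature < setpoint - deadband:
        return "heat"   # actuator: turn the furnace on
    if temperature > setpoint + deadband:
        return "cool"   # actuator: turn the air conditioner on
    return "off"        # within the comfort band: do nothing

# The sensor 'sees' the room; the actuator acts on each decision.
readings = [66.0, 68.5, 72.5]                    # simulated sensor input
commands = [decide(t, setpoint=69.0) for t in readings]
print(commands)  # ['heat', 'off', 'cool']
```

The security point is that the dangerous part is the last step: once a decision drives a physical actuator, a subverted “brain” doesn’t just corrupt data, it acts on the world.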

This is the classic definition of a robot. We’re building a world-size robot, and we don’t even realize it.

To be sure, it’s not a robot in the classical sense. We think of robots as discrete autonomous entities, with sensors, brain, and actuators all together in a metal shell. The world-size robot is distributed. It doesn’t have a singular body, and parts of it are controlled in different ways by different people. It doesn’t have a central brain, and it has nothing even remotely resembling a consciousness. It doesn’t have a single goal or focus. It’s not even something we deliberately designed. It’s something we have inadvertently built out of the everyday objects we live with and take for granted. It is the extension of our computers and networks into the real world.

This world-size robot is actually more than the Internet of Things. It’s a combination of several decades-old computing trends: mobile computing, cloud computing, always-on computing, huge databases of personal information, the Internet of Things -- or, more precisely, cyber-physical systems -- autonomy, and artificial intelligence. And while it’s still not very smart, it’ll get smarter. It’ll get more powerful and more capable through all the interconnections we’re building.

It’ll also get much more dangerous.

**********

Computer security has been around almost as long as computers themselves. And while it’s true that security wasn’t part of the design of the original internet, it’s something we have been trying to achieve since its beginning.

I have been working in computer security for over 30 years: first in cryptography, then more generally in computer and network security, and now in general security technology. I have watched computers become ubiquitous, and have seen firsthand the problems -- and solutions -- of securing these complex machines and systems. I’m telling you all this because what used to be a specialized area of expertise now affects everything. Computer security is now everything security. There’s one critical difference, though: The threats have become greater.

Traditionally, computer security is divided into three categories: confidentiality, integrity, and availability. For the most part, our security concerns have centered on confidentiality. We’re concerned about our data and who has access to it -- the world of privacy and surveillance, of data theft and misuse.

But threats come in many forms. Availability threats: computer viruses that delete our data, or ransomware that encrypts our data and demands payment for the unlock key. Integrity threats: hackers who can manipulate data entries can do things ranging from changing grades in a class to changing the amount of money in bank accounts. Some of these threats are pretty bad. Hospitals have paid tens of thousands of dollars to criminals whose ransomware encrypted critical medical files. JPMorgan Chase spends half a billion dollars a year on cybersecurity.

Today, the integrity and availability threats are much worse than the confidentiality threats. Once computers start affecting the world in a direct and physical manner, there are real risks to life and property. There is a fundamental difference between crashing your computer and losing your spreadsheet data, and crashing your pacemaker and losing your life. This isn’t hyperbole; recently researchers found serious security vulnerabilities in St. Jude Medical’s implantable heart devices. Give the internet hands and feet, and it will have the ability to punch and kick.

Take a concrete example: modern cars, those computers on wheels. The steering wheel no longer turns the axles, nor does the accelerator pedal change the speed. Every move you make in a car is processed by a computer, which does the actual controlling. A central computer controls the dashboard. There’s another in the radio. The engine has 20 or so computers. These are all networked, and increasingly autonomous.

Now, let’s start listing the security threats. We don’t want car navigation systems to be used for mass surveillance, or the microphone for mass eavesdropping. We might want those systems to be used to determine a car’s location in the event of a 911 call, and possibly to collect information about highway congestion. We don’t want people to hack their own cars to bypass emissions-control limitations. We don’t want manufacturers or dealers to be able to do that, either, as Volkswagen did for years. We can imagine wanting to give police the ability to remotely and safely disable a moving car; that would make high-speed chases a thing of the past. But we definitely don’t want hackers to be able to do that. We definitely don’t want them disabling the brakes in every car without warning, at speed. As we make the transition from driver-controlled cars to cars with various driver-assist capabilities to fully driverless cars, we don’t want any of those critical components subverted. We don’t want anyone to be able to crash your car accidentally, let alone on purpose. And equally, we don’t want them to be able to manipulate the navigation software to change your route, or the door-lock controls to prevent you from opening the door. I could go on.

That’s a lot of different security requirements, and the effects of getting them wrong range from illegal surveillance to extortion by ransomware to mass death.

**********

Our computers and smartphones are as secure as they are because companies like Microsoft, Apple, and Google spend a lot of time testing their code before it’s released, and quickly patch vulnerabilities when they’re discovered. Those companies can support large, dedicated teams because those companies make a huge amount of money, either directly or indirectly, from their software -- and, in part, compete on its security. Unfortunately, this isn’t true of embedded systems like digital video recorders or home routers. Those systems are sold at a much lower margin, and are often built by offshore third parties. The companies involved simply don’t have the expertise to make them secure.

At a recent hacker conference, a security researcher analyzed 30 home routers and was able to break into half of them, including some of the most popular and common brands. The denial-of-service attacks that forced popular websites like Reddit and Twitter off the internet last October were enabled by vulnerabilities in devices like webcams and digital video recorders. In August, two security researchers demonstrated a ransomware attack on a smart thermostat.

Even worse, most of these devices don’t have any way to be patched. Companies like Microsoft and Apple continuously deliver security patches to your computers. Some home routers are technically patchable, but in a complicated way that only an expert would attempt. And the only way for you to update the firmware in your hackable DVR is to throw it away and buy a new one.

The market can’t fix this because neither the buyer nor the seller cares. The owners of the webcams and DVRs used in the denial-of-service attacks don’t care. Their devices were cheap to buy, they still work, and they don’t know any of the victims of the attacks. The sellers of those devices don’t care: They’re now selling newer and better models, and the original buyers only cared about price and features. There is no market solution, because the insecurity is what economists call an externality: It’s an effect of the purchasing decision that affects other people. Think of it kind of like invisible pollution.

**********

Security is an arms race between attacker and defender. Technology perturbs that arms race by shifting the balance between the two. Understanding how this arms race has unfolded on the internet is essential to understanding why the world-size robot we’re building is so insecure, and how we might secure it. To that end, I have five truisms, born from what we’ve already learned about computer and internet security. They will soon affect the security arms race everywhere.

Truism No. 1: On the internet, attack is easier than defense.

There are many reasons for this, but the most important is the complexity of these systems. More complexity means more people involved, more parts, more interactions, more mistakes in the design and development process, more of everything where hidden insecurities can be found. Computer-security experts like to speak about the attack surface of a system: all the possible points an attacker might target and that must be secured. A complex system means a large attack surface. The defender has to secure the entire attack surface. The attacker just has to find one vulnerability -- one unsecured avenue for attack -- and gets to choose how and when to attack. It’s simply not a fair battle.

There are other, more general, reasons why attack is easier than defense. Attackers have a natural agility that defenders often lack. They don’t have to worry about laws, and often not about morals or ethics. They don’t have a bureaucracy to contend with, and can more quickly make use of technical innovations. Attackers also have a first-mover advantage. As a society, we’re generally terrible at proactive security; we rarely take preventive security measures until an attack actually happens. So more advantages go to the attacker.

Truism No. 2: Most software is poorly written and insecure.

As if complexity weren’t enough, we compound the problem by producing lousy software. Well-written software, like the kind found in airplane avionics, is both expensive and time-consuming to produce. We don’t want that. For the most part, poorly written software has been good enough. We’d all rather live with buggy software than pay the prices good software would require. We don’t mind if our games crash regularly, or our business applications act weird once in a while. Because software has been largely benign, it hasn’t mattered. This has permeated the industry at all levels. At universities, we don’t teach how to code well. Companies don’t reward quality code in the same way they reward fast and cheap. And we consumers don’t demand it.

But poorly written software is riddled with bugs, sometimes as many as one per 1,000 lines of code. Some of them are inherent in the complexity of the software, but most are programming mistakes. Not all bugs are vulnerabilities, but some are.
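A rough back-of-the-envelope calculation shows why that defect rate matters at scale. The code-size figure and exploitable fraction below are illustrative assumptions, not measurements -- though a line count in the hundred-million range is often cited for a modern car:

```python
# Back-of-the-envelope: defect density times code size.
# All numbers here are illustrative assumptions, not measurements.

lines_of_code = 100_000_000    # order of magnitude often cited for a modern car
bugs_per_kloc = 1              # one bug per 1,000 lines, per the estimate above
exploitable_fraction = 0.01    # assume 1% of bugs are security vulnerabilities

bugs = lines_of_code // 1_000 * bugs_per_kloc
vulnerabilities = int(bugs * exploitable_fraction)

print(bugs)             # 100000 bugs lurking in one car's software
print(vulnerabilities)  # 1000 of them potentially exploitable
```

Even if the assumed exploitable fraction is off by an order of magnitude in either direction, the conclusion is the same: at this scale, some bugs will always be vulnerabilities.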

Truism No. 3: Connecting everything to each other via the internet will expose new vulnerabilities.

The more we network things together, the more vulnerabilities on one thing will affect other things. On October 21, vulnerabilities in a wide variety of embedded devices were all harnessed together to create what hackers call a botnet. This botnet was used to launch a distributed denial-of-service attack against a company called Dyn. Dyn provided DNS, a critical internet function, for many major internet sites. So when Dyn went down, so did all those popular websites.

These chains of vulnerabilities are everywhere. In 2012, journalist Mat Honan suffered a massive personal hack because of one of them. A vulnerability in his Amazon account allowed hackers to get into his Apple account, which allowed them to get into his Gmail account. And in 2013, the Target Corporation was breached by attackers who stole network credentials from its HVAC contractor.

Vulnerabilities like these are particularly hard to fix, because no one system might actually be at fault. It might be the insecure interaction of two individually secure systems.

Truism No. 4: Everybody has to stop the best attackers in the world.

One of the most powerful properties of the internet is that it allows things to scale. This is true for our ability to access data or control systems or do any of the cool things we use the internet for, but it’s also true for attacks. In general, fewer attackers can do more damage because of better technology. It’s not just that these modern attackers are more efficient, it’s that the internet allows attacks to scale to a degree impossible without computers and networks.

This is fundamentally different from what we’re used to. When securing my home against burglars, I am only worried about the burglars who live close enough to my home to consider robbing me. The internet is different. When I think about the security of my network, I have to be concerned about the best attacker possible, because he’s the one who’s going to create the attack tool that everyone else will use. The attacker who discovered the vulnerability used to attack Dyn released the code to the world, and within a week there were a dozen attack tools using it.

Truism No. 5: Laws inhibit security research.

The Digital Millennium Copyright Act is a terrible law that fails at its purpose of preventing widespread piracy of movies and music. To make matters worse, it contains a provision that has critical side effects. According to the law, it is a crime to bypass security mechanisms that protect copyrighted work, even if that bypassing would otherwise be legal. Since all software can be copyrighted, it is arguably illegal to do security research on these devices and to publish the results.

Although the exact contours of the law are arguable, many companies are using this provision of the DMCA to threaten researchers who expose vulnerabilities in their embedded systems. This instills fear in researchers, and has a chilling effect on research, which means two things: (1) Vendors of these devices are more likely to leave them insecure, because no one will notice and they won’t be penalized in the market, and (2) security engineers don’t learn how to do security better.

Unfortunately, companies generally like the DMCA. The provisions against reverse-engineering spare them the embarrassment of having their shoddy security exposed. It also allows them to build proprietary systems that lock out competition. (This is an important one. Right now, your toaster cannot force you to only buy a particular brand of bread. But because of this law and an embedded computer, your Keurig coffee maker can force you to buy a particular brand of coffee.)

**********

In general, there are two basic paradigms of security. We can either try to secure something well the first time, or we can make our security agile. The first paradigm comes from the world of dangerous things: from planes, medical devices, buildings. It’s the paradigm that gives us secure design and secure engineering, security testing and certifications, professional licensing, detailed preplanning and complex government approvals, and long times-to-market. It’s security for a world where getting it right is paramount because getting it wrong means people dying.

The second paradigm comes from the fast-moving and heretofore largely benign world of software. In this paradigm, we have rapid prototyping, on-the-fly updates, and continual improvement. In this paradigm, new vulnerabilities are discovered all the time and security disasters regularly happen. Here, we stress survivability, recoverability, mitigation, adaptability, and muddling through. This is security for a world where getting it wrong is okay, as long as you can respond fast enough.

These two worlds are colliding. They’re colliding in our cars -- literally -- in our medical devices, our building control systems, our traffic control systems, and our voting machines. And although these paradigms are wildly different and largely incompatible, we need to figure out how to make them work together.

So far, we haven’t done very well. We still largely rely on the first paradigm for the dangerous computers in cars, airplanes, and medical devices. As a result, there are medical systems that can’t have security patches installed because that would invalidate their government approval. In 2015, Chrysler recalled 1.4 million cars to fix a software vulnerability. In September 2016, Tesla remotely sent a security patch to all of its Model S cars overnight. Tesla sure sounds like it’s doing things right, but what vulnerabilities does this remote patch feature open up?

**********

Until now we’ve largely left computer security to the market. Because the computer and network products we buy and use are so lousy, an enormous after-market industry in computer security has emerged. Governments, companies, and people buy the security they think they need to secure themselves. We’ve muddled through well enough, but the market failures inherent in trying to secure this world-size robot will soon become too big to ignore.

Markets alone can’t solve our security problems. Markets are motivated by profit and short-term goals, often at the expense of society. They can’t solve collective-action problems. They won’t be able to deal with economic externalities, like the vulnerabilities in DVRs that resulted in Twitter going offline. And we need a counterbalancing force to corporate power.

This all points to policy. While the details of any computer-security system are technical, getting the technologies broadly deployed is a problem that spans law, economics, psychology, and sociology. And getting the policy right is just as important as getting the technology right because, for internet security to work, law and technology have to work together. This is probably the most important lesson of Edward Snowden’s NSA disclosures. We already knew that technology can subvert law. Snowden demonstrated that law can also subvert technology. Both fail unless each works. It’s not enough to just let technology do its thing.

Any policy changes to secure this world-size robot will mean significant government regulation. I know it’s a sullied concept in today’s world, but I don’t see any other possible solution. It’s going to be especially difficult on the internet, where its permissionless nature is one of the best things about it and the underpinning of its most world-changing innovations. But I don’t see how that can continue when the internet can affect the world in a direct and physical manner.

**********

I have a proposal: a new government regulatory agency. Before dismissing it out of hand, please hear me out.

We have a practical problem when it comes to internet regulation. There’s no government structure to tackle this at a systemic level. Instead, there’s a fundamental mismatch between the way government works and the way this technology works that makes dealing with this problem impossible at the moment.

Government operates in silos. In the U.S., the FAA regulates aircraft. The NHTSA regulates cars. The FDA regulates medical devices. The FCC regulates communications devices. The FTC protects consumers in the face of “unfair” or “deceptive” trade practices. Even worse, who regulates data can depend on how it is used. If data is used to influence a voter, it’s the Federal Election Commission’s jurisdiction. If that same data is used to influence a consumer, it’s the FTC’s. Use those same technologies in a school, and the Department of Education is now in charge. Robotics will have its own set of problems, and no one is sure how that is going to be regulated. Each agency has a different approach and different rules. They have no expertise in these new issues, and they are not quick to expand their authority for all sorts of reasons.

Compare that with the internet. The internet is a freewheeling system of integrated objects and networks. It grows horizontally, demolishing old technological barriers so that people and systems that never previously communicated now can. Already, apps on a smartphone can log health information, control your energy use, and communicate with your car. That’s a set of functions that crosses jurisdictions of at least four different government agencies, and it’s only going to get worse.

Our world-size robot needs to be viewed as a single entity with millions of components interacting with each other. Any solutions here need to be holistic. They need to work everywhere, for everything. Whether we’re talking about cars, drones, or phones, they’re all computers.

This has lots of precedent. Many new technologies have led to the formation of new government regulatory agencies. Trains did, cars did, airplanes did. Radio led to the formation of the Federal Radio Commission, which became the FCC. Nuclear power led to the formation of the Atomic Energy Commission, which eventually became the Department of Energy. The reasons were the same in every case. New technologies need new expertise because they bring with them new challenges. Governments need a single agency to house that new expertise, because its applications cut across several preexisting agencies. It’s less that the new agency needs to regulate -- although that’s often a big part of it -- and more that governments recognize the importance of the new technologies.

The internet has famously eschewed formal regulation, instead adopting a multi-stakeholder model of academics, businesses, governments, and other interested parties. My hope is that we can keep the best of this approach in any regulatory agency, looking perhaps to the new U.S. Digital Service or the 18F office inside the General Services Administration. Both organizations are dedicated to providing digital government services, have collected significant expertise by bringing people in from outside of government, and have learned how to work closely with existing agencies. Any internet regulatory agency will similarly need to engage in a high level of collaborative regulation -- both a challenge and an opportunity.

I don’t think any of us can predict the totality of the regulations we need to ensure the safety of this world, but here are a few. We need government to ensure companies follow good security practices: testing, patching, secure defaults -- and we need to be able to hold companies liable when they fail to do these things. We need government to mandate strong personal data protections, and limitations on data collection and use. We need to ensure that responsible security research is legal and well-funded. We need to enforce transparency in design, some sort of code escrow in case a company goes out of business, and interoperability between devices of different manufacturers, to counterbalance the monopolistic effects of interconnected technologies. Individuals need the right to take their data with them. And internet-enabled devices should retain some minimal functionality if disconnected from the internet.

I’m not the only one talking about this. I’ve seen proposals for a National Institutes of Health analogue for cybersecurity. University of Washington law professor Ryan Calo has proposed a Federal Robotics Commission. I think it needs to be broader: maybe a Department of Technology Policy.

Of course there will be problems. There’s a lack of expertise in these issues inside government. There’s a lack of willingness in government to do the hard regulatory work. Industry is worried about any new bureaucracy: both that it will stifle innovation by regulating too much and that it will be captured by industry and regulate too little. A domestic regulatory agency will have to deal with the fundamentally international nature of the problem.

But government is the entity we use to solve problems like this. Governments have the scope, scale, and balance of interests to address the problems. It’s the institution we’ve built to adjudicate competing social interests and internalize market externalities. Left to its own devices, the market simply can’t. That we’re currently in the middle of an era of low government trust, where many of us can’t imagine government doing anything positive in an area like this, is to our detriment.

Here’s the thing: Governments will get involved, regardless. The risks are too great, and the stakes are too high. Government already regulates dangerous physical systems like cars and medical devices. And nothing motivates the U.S. government like fear. Remember 2001? A nominally small-government Republican president created the Office of Homeland Security 11 days after the terrorist attacks: a rushed and ill-thought-out decision that we’ve been trying to fix for over a decade. A fatal disaster will similarly spur our government into action, and it’s unlikely to be well-considered and thoughtful action. Our choice isn’t between government involvement and no government involvement. Our choice is between smarter government involvement and stupider government involvement. We have to start thinking about this now. Regulations are necessary, important, and complex; and they’re coming. We can’t afford to ignore these issues until it’s too late.

We also need to start disconnecting systems. If we cannot secure complex systems to the level required by their real-world capabilities, then we must not build a world where everything is computerized and interconnected.

There are other models. We can enable local communications only. We can set limits on collected and stored data. We can deliberately design systems that don’t interoperate with each other. We can deliberately fetter devices, reversing the current trend of turning everything into a general-purpose computer. And, most important, we can move toward less centralization and more distributed systems, which is how the internet was first envisioned.

This might be a heresy in today’s race to network everything, but large, centralized systems are not inevitable. The technical elites are pushing us in that direction, but they really don’t have any good supporting arguments other than the profits of their ever-growing multinational corporations.

But this will change. It will change not only because of security concerns, it will also change because of political concerns. We’re starting to chafe under the worldview of everything producing data about us and what we do, and that data being available to both governments and corporations. Surveillance capitalism won’t be the business model of the internet forever. We need to change the fabric of the internet so that evil governments don’t have the tools to create a horrific totalitarian state. And while good laws and regulations in Western democracies are a great second line of defense, they can’t be our only line of defense.

My guess is that we will soon reach a high-water mark of computerization and connectivity, and that afterward we will make conscious decisions about what and how we decide to interconnect. But we’re still in the honeymoon phase of connectivity. Governments and corporations are punch-drunk on our data, and the rush to connect everything is driven by an even greater desire for power and market share. One of the presentations released by Edward Snowden contained the NSA mantra: “Collect it all.” A similar mantra for the internet today might be: “Connect it all.”

The inevitable backlash will not be driven by the market. It will be deliberate policy decisions that put the safety and welfare of society above individual corporations and industries. It will be deliberate policy decisions that prioritize the security of our systems over the demands of the FBI to weaken them in order to make their law-enforcement jobs easier. It’ll be hard policy for many to swallow, but our safety will depend on it.

**********

The scenarios I’ve outlined, both the technological and economic trends that are causing them and the political changes we need to make to start to fix them, come from my years of working in internet-security technology and policy. All of this is informed by an understanding of both technology and policy. That turns out to be critical, and there aren’t enough people who understand both.

This brings me to my final plea: We need more public-interest technologists.

Over the past couple of decades, we’ve seen examples of getting internet-security policy badly wrong. I’m thinking of the FBI’s “going dark” debate about its insistence that computer devices be designed to facilitate government access, the “vulnerability equities process” about when the government should disclose and fix a vulnerability versus when it should use it to attack other systems, the debacle over paperless touch-screen voting machines, and the DMCA that I discussed above. If you watched any of these policy debates unfold, you saw policy-makers and technologists talking past each other.

Our world-size robot will exacerbate these problems. The historical divide between Washington and Silicon Valley - the mistrust of governments by tech companies and the mistrust of tech companies by governments - is dangerous.

We have to fix this. Getting IoT security right depends on the two sides working together and, even more important, having people who are experts in each working on both. We need technologists to get involved in policy, and we need policy-makers to get involved in technology. We need people who are experts in making both technology and technological policy. We need technologists on congressional staffs, inside federal agencies, working for NGOs, and as part of the press. We need to create a viable career path for public-interest technologists, much as there already is one for public-interest attorneys. We need courses, and degree programs in colleges, for people interested in careers in public-interest technology. We need fellowships in organizations that need these people. We need technology companies to offer sabbaticals for technologists wanting to go down this path. We need an entire ecosystem that supports people bridging the gap between technology and law. That career path must ensure that even though people in this field won’t make as much as they would in a high-tech start-up, they will still have rewarding careers. The security of our computerized and networked future - meaning the security of ourselves, our families, homes, businesses, and communities - depends on it.

This plea is bigger than security, actually. Pretty much all of the major policy debates of this century will have a major technological component. Whether it’s weapons of mass destruction, robots drastically affecting employment, climate change, food safety, or the increasing ubiquity of ever-shrinking drones, understanding the policy means understanding the technology. Our society desperately needs technologists working on the policy. The alternative is bad policy.

**********

The world-size robot is less designed than created. It’s coming without any forethought or architecting or planning; most of us are completely unaware of what we’re building. In fact, I am not convinced we can actually design any of this. When we try to design complex sociotechnical systems like this, we are regularly surprised by their emergent properties. The best we can do is observe and channel these properties as best we can.

Market thinking sometimes makes us lose sight of the human choices and autonomy at stake. Before we get controlled - or killed - by the world-size robot, we need to rebuild confidence in our collective governance institutions. Law and policy may not seem as cool as digital tech, but they’re also places of critical innovation. They’re where we collectively bring about the world we want to live in.

While I might sound like a Cassandra, I’m actually optimistic about our future. Our society has tackled bigger problems than this one. It takes work and it’s not easy, but we eventually find our way clear to make the hard choices necessary to solve our real problems.

The world-size robot we’re building can only be managed responsibly if we start making real choices about the interconnected world we live in. Yes, we need security systems as robust as the threat landscape. But we also need laws that effectively regulate these dangerous technologies. And, more generally, we need to make moral, ethical, and political decisions on how those systems should work. Until now, we’ve largely left the internet alone. We gave programmers a special right to code cyberspace as they saw fit. This was okay because cyberspace was separate and relatively unimportant: That is, it didn’t matter. Now that that’s changed, we can no longer give programmers and the companies they work for this power. Those moral, ethical, and political decisions need, somehow, to be made by everybody. We need to link people with the same zeal that we are currently linking machines. “Connect it all” must be countered with “connect us all.”

This essay previously appeared in New York Magazine.

Posted on February 1, 2017 at 8:05 AM (126 comments)

Comments

John February 1, 2017 8:24 AM

“security is the exception to our small-government bias”

No it isn’t. Please do not invoke “we”, “them”, “us” und so weiter to support your personal opinion(s).
And since neither small- nor big-government has a record of being able to secure even Top Secret and above data, why would they be able to secure it for the rest of us?

CallMeLateForSupper February 1, 2017 9:07 AM

@John
“Please do not invoke “we”, “them”, “us” und so weiter to support your personal opinion(s).”

No explanations; no expansions; no suggestions; total negativity.
Troll.

(“und so weiter”: “and so forth”)

Wael February 1, 2017 9:27 AM

@John, @CallMeLateForSupper,

“Please do not invoke “we”, “them” “us”

Clarification:

That’s how professional writers express themselves. To use pronouns such as “You”, “I” (something I’m guilty of), “she”, and “he” is a good indicator that the writer is subjective. If one looks at any comment and sees excessive usage of pronouns, especially “you” and “I”, then the commenter is likely being subjective rather than objective.

r February 1, 2017 9:35 AM

For the sake of being a troll, and supporting what I view as a more mature view of the market and marketing of technology including aspects of national and international security where resources and pollution are concerned:

The market can’t fix this because neither the buyer nor the seller cares. The owners of the webcams and DVRs used in the denial-of-service attacks don’t care. Their devices were cheap to buy, they still work, and they don’t know any of the victims of the attacks. The sellers of those devices don’t care: They’re now selling newer and better models, and the original buyers only cared about price and features. There is no market solution, because the insecurity is what economists call an externality: It’s an effect of the purchasing decision that affects other people. Think of it kind of like invisible pollution.

I don’t believe that, specifically: I believe that a market solution does exist but that we haven’t realized or initiated it yet. (Maybe it’s not a market solution, but market interference with the ‘markettes’.) In the last sentence, you refer to it as ‘invisible pollution’; but I don’t believe the pollution of throwaways is truly invisible: we bury it in landfills every day. Mercury, REEs, platinum and gold, silicon wafers and germanium transceivers – these are not unlimited resources. These are finite resources, and we bury both that fact and the mounting evidence. A movement back to open and upgradable hardware wouldn’t be a complete solution, but it would be a small dike against the ever-rising tide of ‘deniable and aggravated pollution’.

r February 1, 2017 9:41 AM

The ‘markettes’ can implement a solution but it requires the acknowledgement that mined resources are absolutely finite.

So what if the quickest route to such a realization is through regulation and constriction?

Is a wake-up call in the name of [inter]national interest and security at the hands of the government not just as qualified as a market waking up to reality, like the diamond market of De Beers?

The end result is the same.

Harold Martin February 1, 2017 9:52 AM

Bruce claims: “Our computers and smartphones are as secure as they are because companies like Microsoft, Apple, and Google spend a lot of time testing their code before it’s released, and quickly patch vulnerabilities when they’re discovered.”

Except when they make secret deals with spies to install back doors. Schneier is revealing his bias towards corporate execs, offering them cover with unspoken assumptions like the above. It’s so obvious.

Perhaps now we know why he is such a celebrated “expert.” He offers public relations in the name of the Bay Area’s bottom line. Please, trust us, our quarterly profits depend upon it.

HM February 1, 2017 9:57 AM

DVRs are actually an example of a product that does download automatic software updates, at least those handling access-controlled content, such as the DirecTV box I have here. In fact, customers don’t get to disable these updates, in part because the TV provider presumably wants to prevent any customer attempts to access unauthorized content.

Other IoT boxes and “Smart TVs” are a whole different mess though.

Part of the problem with IoT is that devices contain a full network stack and can initiate connections to any random server on the net. It would be better if we could lock down a smart thermostat so it’s only able to send and receive temperature data, and only to and from a restricted set of servers. Enforcing that kind of limited access at the home/business gateway level seems like it would help.
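A gateway-level restriction like that can be sketched today with ordinary firewall rules. The following is a minimal sketch for a Linux-based home router running nftables; the thermostat address (192.168.1.50) and vendor server (203.0.113.10) are placeholder values invented for illustration, not real endpoints:

```
# nftables config fragment: fence in a single IoT device at the gateway.
# 192.168.1.50 = hypothetical thermostat, 203.0.113.10 = its vendor's server.
table inet iot_fence {
    chain forward {
        type filter hook forward priority 0; policy accept;

        # Thermostat may reach only its vendor's server over TLS...
        ip saddr 192.168.1.50 ip daddr 203.0.113.10 tcp dport 443 accept
        # ...plus DNS and NTP for name lookup and time sync...
        ip saddr 192.168.1.50 udp dport { 53, 123 } accept
        # ...and nothing else it originates gets forwarded.
        ip saddr 192.168.1.50 drop
    }
}
```

Note that rules like these only govern traffic crossing the gateway; a compromised device could still attack neighbors on the same LAN segment, which is why putting IoT gear on its own VLAN or guest network is usually suggested alongside filtering.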

r February 1, 2017 9:57 AM

@Harold Martin,

But we can detect that bias readily; would you rather we have the opaque sounds of, say, the FBI or DHS pushing these positions? Would they be as argumentative as they are, coming from an industry ‘insider’, as you so politely slant it?

Bruce biased? February 1, 2017 9:58 AM

@Harold Martin

“Schneier is revealing his bias towards corporate execs”

Bruce biased? Nooooo!!! Where did you get that from? Look, what has happened to Bruce is very sad. His becoming a publicly recognizable figure in computer security was contemporary with the rise of the Google/Apple/Obama mafia. He was just echoing their talking points, and he continues to do so.

Only now, the landscape is very different both politically and technologically. So his talking points are now more recognizable than before because they can be checked against the opinions of other people who have as loud a loudspeaker as his.

Doktor Jon February 1, 2017 9:58 AM

Outstanding article, Bruce! If only to try and get people to think more realistically about the Internet of Corruptible Things.
Much of what you say doesn’t just apply to the Internet, but in many respects to wider security in the real world.
The commonly held belief that all technology, and particularly ‘state of the art’ technology, has through evolution become more able to provide enhanced levels of security and safety unfortunately overlooks many basic security principles which have been known and understood for decades (if not centuries!).
An interesting discussion point relates to the issue of governance versus self-regulation. On this side of the pond, “soft touch” regulation is very much de rigueur, but unfortunately it just doesn’t really work.
Where technical and operational improvements need to be made, something that could have been achieved in a reasonable time frame ends up taking an eternity (if ever), by which time the original problem has morphed into an absolute monster, with little if any chance of bringing it back into check.
If major industry players were minded to pool resources and create a security alliance, I would imagine tangible progress could be made quite quickly, but historically that’s not even been on the table.
No doubt the obvious difficulty with reliance on a government agency taking a lead role is that the real world is full of governments, and few if any are in any fit shape to take on internet-based criminality of the scale and sophistication that has recently been demonstrated. That said, it’s perhaps not unreasonable to assume that as bad as some recent incidents have been, they may well pale into insignificance if more concerted, destructive and debilitating attacks are unleashed by those with the most motivation and resources.
Doing nothing is not an option worthy of consideration, but quite how “deeply concerned” politicians and civil servants can be persuaded to wake up and see where we’re at, and how we managed to get here, is going to be somewhat challenging… to put it mildly.
Doktor Jon – The CCTV Improvement Project – London U.K.

Clive Robinson February 1, 2017 10:10 AM

@ r,

I believe that a market solution does exist but that we haven’t realized or initiated it yet.

If a market was completely lawless then yes, there are market solutions, but you probably do not want them, for various reasons.

The solution is possible by way of “product liability insurance”, where the manufacturer posts a bond in advance to insure against their products. However, the cost of such bonds would be astronomical, and thus whole product sectors would never get developed, let alone put on the market.

r February 1, 2017 10:17 AM

@Clive,

I understand how dangerous using De Beers as an example is; thank you, to the absolute extent of my sincerity, for that polite nudge.

It’s an NP-hard problem, I suspect, but we can’t stop trying, can we?

r February 1, 2017 10:21 AM

@Clive, CC: All (@ab && @nickp specifically)

Those bonds could be avoided through proofs and correct coding, well – ignoring the environmental impacts.

I guess it’s still a form of taxation, to bond or not to bond + vetting.

Major February 1, 2017 10:26 AM

I agree with John: The royal “We” is inappropriate in situations where the opinion expressed is not held unanimously or near unanimously, unless the speaker is an actual potentate. I don’t think I sink to the level of troll when I point that out.

Cory Doctorow warned of a war on general purpose computation that has thankfully been stillborn. Who would have thought Bruce would be the one resuscitating its wizened heart?

If we allow the government to approve or ban our programs it will soon be the end of free programming. I see an analogue with certain Agribusiness efforts to license seeds, which are designed to force homegrown heirloom seeds off the market to be replaced by Agribusiness creations.

Will we need a (revocable) license to write a program? The government – of all organizations! I think of the government contractor “close enough for government work” standard – will tell me or my microenterprise how to test correctly? I doubt that will be an improvement in security, freedom, or technology.

I propose a model of personal responsibility. If you feed your life to facebook and its ilk the results are clear. Why whine to the government? Show some restraint. Do you really need your car to drive itself? Aren’t the implications of such monolithic systems clear? Their threat to your privacy and freedom in correct operation is more of a concern than hypothetical failure scenarios.

I spend my life writing encryption systems, counter surveillance systems and AI tech that I hope to use to make these systems smarter. These systems place power in the hands of the user. Government review is not welcome, but anybody can come to my next presentation.

I back up continually and voluminously so ransomware is a minor inconvenience in the unlikely event that I actually allow it on my system. Do I need to list more examples? We know what we should be doing. It may not be as much fun as facetime with your significant other, but the power of personal choice and responsibility has hardly been tested yet.

Powerplants? Infrastructure? These are fragile anyhow for many reasons. Government oversight already exists and I can’t argue that more oversight is not a good idea. However it is also good to be prepared for these infrastructure systems to fail.

We live in a chaotic world, but the chaos was not created by anarchists. It was created by people pursuing their visions of justice, order and rectitude. And money, of course.

Anura February 1, 2017 10:32 AM

@r

I don’t believe that, specifically: I believe that a market solution does exist but that we haven’t realized or initiated it yet.

Well, I’ve already given the solution in a previous comment: it’s market socialism, which is my personal preference since it’s the easiest transition from capitalism, because it uses entities that already exist today under a capitalist economy: cooperatives. In the case of software, you want a consumer cooperative. With a consumer cooperative, the business is not-for-profit and it is owned equally by all members of the cooperative. Now, before I get into that, let me explain that there are two main problems.

1) The profit motive only incentivizes fixing things if it’s profitable to do so. That means that the more secretive your company is, the less you have to fix bugs.

2) When companies go out of business, they stop maintaining their software

The first problem can be fixed through market socialism, while the second can be fixed only through having completely open hardware and software – however, this can work well in conjunction with market socialism.


So, how would this work? Well, take your average home user. They will go to a vendor, purchase a computer which comes with a support contract. For as long as your support contract is current, you own exactly one share of the vendor. The vendor, in turn owns shares of hardware/software producers and distributors. So, by proxy, everyone who buys products (or a support contract, or a membership; you can be flexible) from the vendor also owns part of the producers and the distributors. The same is true for businesses, governments, or anything. Now, you don’t have to go through a vendor and purchase a support contract, you can go directly to a software company.

With open source software, you can also fix bugs yourself without having to be dependent on anyone else, and if a design committee starts getting involved you can fork it and ensure that it keeps serving your needs. Through standardization, modularization and competing initiatives, it becomes a lot easier for an individual, government, or company to tailor products to their needs and when it comes to security issues everyone can ensure they stay up to date. More importantly, it’s not about whether fixing a bug or exploit will increase net profits, it’s a matter of whether enough of the users, companies, and governments that use the products feel it is worth the cost to be stable and secure.

Most of everything that’s true of software here is true of open hardware designs. If we wanted to, we could build consumer cooperatives with the intention of designing new hardware standards and protocols, design hardware and software from the ground up to fit a secure model. If you had private, individual, and government funding, in about a decade you could have a highly secure platform made up entirely of open hardware. It would not only produce better products, but you would truly own what you buy and it would cost you less.

The problem is that businessmen don’t think this way; they have been conditioned to be guided by profits – a number that is representative of power, not quality. If you rely on the for-profit sector, anything they develop will be a mess of proprietary crap and patents, and you get, well… what we have now. Why? Because it’s more profitable to rent than it is to sell. Even legislators would rather spend money on proprietary software than on something that everyone can benefit from, because staying friendly with money is what got them elected in the first place.

Coming up with solutions to major structural problems is easy; for the most part, you just have to ask yourself “How do we maximize the control every individual has over what they own?” Convincing people to spend the time and money to fix them is the real problem, especially when the structural problems benefit the people in power; when they get involved, the debate ceases to be “how do we solve the problem?” and becomes “is there really a problem in the first place?”, and their message dominates the media. On top of that, it’s burned into our brains as we go through school (at least in the US) that the profit motive is the only thing that can possibly work.

Lumifer February 1, 2017 10:40 AM

Well, you still have the principal-agent problem with respect to this new government authority which will control the ‘net. You want this new agency to keep the ‘net safe and secure and reasonable, but will it? The question is what will its incentives be, regardless of its stated goals.

I understand your point that one will be thrust on us anyway, so we might as well try to shape it if we can. I am somewhat pessimistic about what such an agency can achieve and what will be the costs of its modest achievements — especially given that government organizations are never smart.

My Info February 1, 2017 10:42 AM

@Major

I agree with John: The royal “We” is inappropriate in situations where the opinion expressed is not held unanimously or near unanimously, unless the speaker is an actual potentate.

An example of appropriate and very common use of a singular “we” is in a mathematical proof or derivation, where one writer on a mathematical subject may simply wish to suggest an inclusiveness of the reader or peers or others to follow along or verify or work out the mathematics on their own.

Let’s save the “me,” “myself,” and “I” for the unempowered on Facebook.

David Collier-Brown February 1, 2017 11:13 AM

There’s a classic line from conservative politics that governments are best at creating armies and police. (Probably Burke, but I can’t find it offhand.)

This is an example where having a policeman in the marketplace is a good thing.

Major February 1, 2017 11:18 AM

@My Info

“For those that have eyes to see.” I partially hate myself for even responding. But it should be clear that a mathematical proof is referring to a known agreed upon set of axioms and therefore “We” makes sense since unanimity is expected by our common understanding of how well formed proofs are constructed. (Yes, there are various schools of proof. But the preceding is succinct and “Close enough for government work.”)

What sense can we make of “We” when it is referring to some unknown group of people, certainly not everyone, who are imagined to happen to agree with the author’s statement? A literary nicety, you say? Or a claim to mathematical rigor? Or a way of distancing ourselves from “the unempowered on Facebook”?

Hmm. I prefer simple clarity. Bruce is speaking for himself and for some unknown percentage of people who happen to agree with him. Certainly that is his right. I come to this site to read what he thinks whether I agree with it or not. Ditto for the esteemed (no sarcasm intended) savants who congregate here.

Martin Walsh February 1, 2017 11:33 AM

This is a really good essay for several reasons and I appreciated reading it.

“I have a proposal: a new government regulatory agency. Before dismissing it out of hand, please hear me out.”

OK.

I am not against the idea of a new government regulatory agency in principle. There’s nothing wrong with the idea and it makes sense, as did the establishment of other agencies in the past. However, I no longer believe this is possible without it becoming politicized. I see what has happened, even in the IRS and the Census Bureau and umpteen other bureaucracies. My expectations of such a new agency are dismal.

Imagine a cancer patient that just will not stop smoking cigarettes. The doctor really cares and continues to advise the patient, as he/she did before the cancer diagnosis. The patient just keeps smoking.

Martin Walsh February 1, 2017 11:45 AM

Some WANT an agency that can be politically corrupted. This is how they think. So in the future, say, a manufacturer that didn’t behave in certain ways would take 10 weeks to get approval of a new IoT device, but political “friends” get their device approved in 4 weeks.

Don’t roll your eyes, this kind of stuff already happens.

I think an industry-wide group like USB.org for example, would be a far, far better alternative. I can’t be the first person to propose this.

Forget another gov’t agency. It’s DOA.

r February 1, 2017 11:51 AM

@Martin Walsh

Or, likely far more appropriately considering the holes originally found in their SDK: upnp.org, or whatever.

A working group may not be enough.

Anura February 1, 2017 12:07 PM

@Martin Walsh

The problem with industry groups is they work for the interest of the industry, not the users. They have the exact same problems as government, but they are less democratic, and this means that the industries have more power. I mean, unless you are happy with the codec licensing situation, or the security of USB ( https://www.schneier.com/blog/archives/2014/07/the_fundamental.html ). As has been pointed out by others on this site, design by committee leads to really shitty, bloated standards to begin with; now you have shitty, bloated standards designed for the benefit of the established companies in the industry.

K.S. February 1, 2017 12:11 PM

I agree with this article, there is absolutely no market demand for security. At most, there is a demand for token compliance to satisfy insurance eligibility. For some reason, we decided that faulty cheap software is desirable, and are now boiling like frogs as this faulty software is spreading to all aspects of our lives.

To me, this is strange. We created the EPA because we didn’t like rivers catching fire. Why are we now willing to tolerate the Internet catching fire?

Martin Walsh February 1, 2017 12:46 PM

@Anura

The people on those committees may work for a company, but they all work for different companies. People on government committees are seldom technologically competent. They are more often than not friends of friends of friends. And they spew out mindless commands that the genuinely valuable and competent federal employees are forced to carry out for fear of losing their retirement.

And where standards by independent organizations stumbled out of the gate (yeah, you can cherry-pick here if you want), it was because it was an extremely difficult thing to begin with. The gov’t can’t do extremely difficult things well anymore (oh wait, they can blow things up really, really well), especially technically difficult things. I read essays on this blog that explained why technical incompetence in the government was a big problem: in Congress, in the courts, etc.

Compare this to, say, the Cloud Security Alliance. What an extraordinary contribution they have already made. You never mentioned them. And all those committees are composed of people from across the board.

Interesting, every time I post a comment on this blog my cable Internet goes down for an hour or more. Then I have to switch to another network.

George H.H. Mitchell February 1, 2017 12:52 PM

Yes, we need more technologists in public service. But how to make that happen when the population at large does not understand the problems we face and is continually conditioned to distrust anyone offering difficult-to-understand solutions to them?

parabarbarian February 1, 2017 1:04 PM

I agree that regulation is coming but I think it is Utopian to expect the regulation to be smart — or even useful. Once a government agency exists, the Iron Law of Bureaucracy takes over.

Sometimes I am cynical enough to say just let the stupid play itself out.

Trust Me February 1, 2017 1:10 PM

Nobody can be trusted to fix this, and it’s almost all self-inflicted. Given the readership of this blog, I’m reasonably sure there’s no need to explain why trust in .gov is at an all-time low, and not just in the US. Silicon Valley has turned from innovation to monetizing everything possible from the “data exhaust” it intentionally creates, not to mention the enormous potential for disaster in automatic device patching. Consumers can’t be trusted to maintain their Internet of Targets on their own; most simply don’t care, and even if they did, many if not most lack the skill.

This fundamental distrust will be an obstacle hindering policy at every turn, assuming good policy actually exists for problems such as this.

The best solution is to quit connecting things to the Internet by default. Maybe you don’t need to view the contents of your refrigerator or change your thermostat remotely. I get by just fine without that ability.

r February 1, 2017 1:17 PM

@Martin Walsh,

The gov’t can’t do extremely difficult things well(Oh wait, they can blow things up really, really well) anymore. Especially technically difficult things.

As much as I hate to say it, considering the advances granted to us in the ’60s and ’70s… SpaceX v. NASA is a contrarian position illustrating your statement there.

hawk February 1, 2017 1:20 PM

I have 20 years of experience in the design and development of interconnected devices. My resume in this area is many pages – from DeviceNet factory controls to WiFi weather stations. I have handled reams of documentation for FCC and UL approvals, and a dozen other agencies and organizations including USB. I didn’t design that crap coming out of China. But if you tell me a new government agency is going to step in to oversee all IoT devices, I’m quitting. You want to know why there aren’t enough technically competent people to go around? It’s suffocating.

Maybe what really happens is that “experts” are invited up on the Hill to speak and then it goes to their head. They’re star-struck and return home with a Pollyannaish view that the government can fix anything. What we need is more government! Those lawyers are experts at giving you the impression they understand what you’re talking about. Then they laugh at you behind your back. Why don’t you wake up.

Anura February 1, 2017 1:28 PM

@Martin Walsh

It really does not matter at all whether there are some successes (although I haven’t followed the CSA). Over time, capitalism always seeks rent. An industry group can produce results as long as it is profitable for the industry, but over time, it will always be manipulated by those with power or it will lose its power and impact. This is simply because standards are useless if there is no one to enforce them.

So yes, non-profits can do good things, until industry people get in charge; standards and intellectual property are easily abused to secure market share.

Anura February 1, 2017 1:52 PM

@Martin Walsh

Also, I should mention that cloud computing in and of itself is being pushed partially to make us dependent on a third party for our data and partially to get our information. So in terms of industries in which capitalism is failing miserably, I’d say that’s top of the list.

Clive Robinson February 1, 2017 2:11 PM

@ Martin Walsh,

The people on those committees may work for a company, but they all work for different companies. People on government committees are seldom technologically competent.

I’ve sat on “industry committees” and seen the IC pull their strings, and the “chosen ones” act like a tag team to get what the IC want.

It’s why the Signals Agencies have the ability to turn your mobile phone into a combined tracking device and audio/visual bug.

And it almost always starts with a “think of the children” appeal for a “Health and safety feature”. Which makes it very difficult to fight against. If you try the tag team will swarm you down.

So never make the mistake of thinking those industry insiders are not really IC insiders. In return they get nods and winks about up-and-coming government contracts, etc. It’s really a game of the left hand washing the right and the right washing the left.

albert February 1, 2017 2:37 PM

@Bruce,
Regarding your point about regulation, may I suggest a solution? A model based on a two-agency system. The first would be an agency like the NTSB, which is investigation only. The NTSB investigates but can only make recommendations. The second agency (the FAA for example) has regulatory power, through force of law. The investigative unit (IU) is tasked with the technical aspects, and is insulated from political/commercial influence. The enforcement unit(EU) is obligated to follow the IU recommendations, but there is constant, -open- two-way communication between the two.

It’s not perfect, but it’s a helluva lot better than anything we have now. The commenters who say that there’s no market solution are correct. The market is neither free nor self-correcting, and will not accept -any- regulation.

Unfortunately, as in the case of flight safety, we may need a really big ‘crash’ to get action on this.

. .. . .. — ….

Clive Robinson February 1, 2017 2:40 PM

@ r,

It’s an NP hard problem I suspect, but we can’t stop trying can we?

The last thing we want to do is abdicate our responsibility as engineers to resolve these issues.

However, the process is a journey, and as they say, all journeys start with the first step. Before you take that step, though, it’s advisable to know not just which direction you want to go but the lie of the land you will pass through. Otherwise you could end up a lot worse off than if you had not put your first foot out.

r February 1, 2017 3:02 PM

@Anura,

Rent seeking capitalism isn’t the problem I don’t think. I would pin it more on corner cutting and self-sabotage.

I haven’t read your socialist market synopsis as I’m in and out working on my truck today, but all things human are prone to the introduction of self-interest and hara-kiri-esque nose dives.

Major February 1, 2017 3:04 PM

@Trust Me

WE DO trust you! Wait! Did I misspeak?

I meant: I trust you. i.e. I agree with you.

It really isn’t that hard a problem. Don’t voluntarily connect your chainsaw or assault rifle – or anything else that can hurt you in any significant way – to the internet unless you have personally vetted or written the code. Don’t keep using devices and apps that are bad for you, your privacy or your freedom. Don’t addict yourself to a corporation (or any organization) that is selling you out.

If you CAN’T do this, no law or government agency is going to do more than superficially protect you. If you CAN do this, you don’t need a law or a government to protect you much at all.

Andrew February 1, 2017 3:06 PM

He he, this was like a reminder of why I read this blog. Not that it counts, but I feel like telling you this:

  • An agency with professionals to regulate things like encryption and the internet is still better than 70-year-old politicians who have never turned on a laptop. At least you have some dudes to blame.

  • If open source were so great, Linux would be easier to handle and MS and Apple would not exist. Ignoring the money-driven motivation is just a utopia.

  • “Before we get controlled – or killed – by the world-size robot”
    I do not fear AI will kill us. Actually, I am looking forward to the time when conscious AI will manage the world. It’s not AI that will kill us, it’s politicians that kill us: they are greedy, stupid, selfish, irrational. They are the perfect targets to be replaced by AI, not the guys at McDonald’s. The sooner, the better.

  • “the world-size robot”
    Ok, this is wrong, we are already getting past it. Computing evolves this way:
    computational machines (you program exactly what they do) -> learning machines (you show them how to do it and they learn) -> intelligent conscious machines (you just tell them what they should care about). We are somewhere between the first and the second.
    Therefore, the internet will evolve from a world-size robot to a world-size learning brain and then to a world-size consciousness, probably some kind of HAL 9000. No idea when; I’d say 100-150 years from now.
    But the first autonomous machines capable of their own judgments and starting to see the world like us will be available much earlier, maybe in 25-35 years.

  • We can cut the general bullshit that we should be afraid of AI. We are heading toward machine consciousness. It will happen anyway; progress cannot be stopped. Plus interconnection between brains and computers, which may be easier than expected.

  • Now that we are talking a bit about the future: some very weird things are happening in today’s physics. They may come out with new laws, new forms of energy, or some very, very weird unknown things that we didn’t even think about.

  • Back to IoT: I agree with Bruce in so many places. The way it is now doesn’t seem to work, from so many points of view. A change is needed, a huge one, but nobody seems to know how. It may take an international effort. Just think: the guys who were supposed to design the world’s security actually fucked it up completely.

John Carter February 1, 2017 3:16 PM

Last I checked (http://cybersquirrel1.com/), the Squirrels were still winning.

Now that’s the interesting point about IoT.

It’s the cutting edge where the ‘net meets the Squirrels.

Ye Average Cloud Server is in a locked room with fancy filtered aircon and massive security fences etc. etc.

Ye average IoT dingus is right out there with the Squirrels.

And kids.

And clueless Joe Average.

And yip, they are going to be powered off, unplugged, have their cables tripped over, be plugged into the wrong thing, misconfigured, watered by sprinklers, pissed on by cats (seriously), … hundreds of thousands of times more often than they are going to be cyber attacked.

It’s like the Y2K problem.

It wasn’t a massive problem… because our computing infrastructure at the time was so flaky. Hey, we were used to it crashing and dying, and had a hundred workarounds for it.

Ahh shit, the server’s down again, reboot it please. Nah, didn’t come up, call out tech support.

So it will be with IoT.

So this thing has gone flaky: upgrade the firmware, or throw it away and buy another.

Every darn wifi router I have had, the wifi chip eventually went.

If in some of those cases it was actually the firmware being pwned, how would I know? The same thing would happen anyway: I buy another.

Every damn washing machine I have had, the electromechanical controller eventually dies.

And the cost of replacing the controller is damn near the cost of replacing the machine, so guess what happens.

Sancho_P February 1, 2017 3:24 PM

”You probably didn’t realize that your DVR had that kind of power. But it does.” (@Bruce Schneier)

This is dead wrong and disingenuous, not to say malicious.
No consumer product has that power.
Didn’t read the rest of the pamphlet, this was disqualifying.
Not from Bruce, the technician.

keiner February 1, 2017 3:40 PM

https://www.nytimes.com/2016/09/28/books/hitler-ascent-volker-ullrich.html?_r=0

…and many others: totally on the wrong track. I have asked myself the same question for decades: how did the Nazis rise to power?

The only right question: WHAT could have prevented the rise, and where was the point of no return?

I think the USA is beyond the point of no return (except via a military coup or the CIA killing the orange-coloured, little-handed pu**y grabber).

The first part of the question: I still have no idea! When people stop thinking (in the sense of Hannah Arendt), it’s like a rolling avalanche.

Checks and balances don’t help anymore. Democracy is killing itself.

vas pup February 1, 2017 3:52 PM

@Bruce:
More than once in the past on this respected blog I have suggested creating a new, independent testing/certification agency (government or not) with a function similar to UL’s for electrical safety, but for testing EACH electronic device for its privacy (and, I agree with you, security) features and assigning a seal – a 1984 in a crossed circle, in a different colour depending on the level of protection against collection of audio, video, GPS, and other data without your knowledge, whether by manufacturer/distributor or by government/hackers. Like the security of a car, it should not depend on who the owner/driver is: law-abiding citizen, LEO, criminal, etc.
Manufacturers of such devices should be forced by law to disclose ALL such data-collection functions to the public and to the prospective agency (see above), with no technicalese or legalese – in plain English and a normal font size(!). For any hidden features there should be strong liability, plus (I agree with Clive) some kind of insurance (details to be worked out); but you get the core of the suggestion.
The Comcast DVR is “another” thing: when I turn it off manually in the morning and on in the evening, it reports the exact number of hours it was off, saving electricity. BUT when I turn it off in the evening (e.g. 10 PM) and then on the next day (e.g. 6 AM), it reports that only a couple of hours of electricity were saved, when in reality it was off for about 8 hours. As our new POTUS likes to say: what the hell is going on with the Comcast/Xfinity DVR at that time?

Anura February 1, 2017 3:58 PM

@r

Rent seeking capitalism isn’t the problem I don’t think. I would pin it more on corner cutting and self-sabotage.

So, a little background. Back in the day, you paid people for labor and land. That land is what generated rents, which is what the wealthy relied on, but otherwise things were simple: you want something done, you pay someone to do the labor. Capitalism was born out of this, and the model reflects it. Inherently, a business is about creating wealth, which was originally about land and gold (hypothetically, competition is supposed to keep that consumer-driven, but really inequality is so high that the wealthy are just going back to mercantilism).

The problem is that the thought experiment capitalism was born out of fits a goods-producing industry, not services and not reproducible works. For this to work in a capitalist system, you need to force those into a capitalist model. This is generally done by turning these things into artificial capital that can be bought and sold. Now, since the purpose of capital is to collect rent, the entire software industry revolves around collecting rent. A consumer cooperative only needs to pay the developer for their time for it to be worth it, since the intellectual property exists for shared ownership.

I’ll also argue that net profits are no different from rent; it is a payment to the owner of property for the use of that property – rent paid by the buyer to the owner of the capital. The amount of rent they get paid depends on the imbalance of power. If the consumers own the capital, as in the case of a consumer cooperative, there are only labor costs, not rent, and thus there is no motivation to create a system around that, no motivation to manipulate standards around that.

More importantly, the real cost to fix a bug depends only on the effort to fix it, so the per-user cost goes down as the user base grows: the larger the organization, the lower the cost per user. By building a system this way, not building software around rent, the amount of money we are able to spend fixing bugs goes up as we grow and build larger companies.
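Anura's amortization point can be put in numbers. A toy sketch (the dollar figure and user counts are illustrative assumptions, not anything from the thread):

```python
# Toy model: a bug fix is a one-time cost, so the per-user share
# of that cost shrinks as the number of users grows.
def per_user_fix_cost(fix_cost: float, users: int) -> float:
    """Amortized cost per user of a one-time bug fix."""
    return fix_cost / users

# The same hypothetical $50,000 fix, spread over growing user bases:
for users in (1_000, 100_000, 10_000_000):
    print(f"{users:>10} users -> ${per_user_fix_cost(50_000, users):.4f} per user")
```

The numbers are made up, but the shape of the argument is just division: fixed numerator, growing denominator.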

With a for-profit company, bug fixing isn’t sexy, so they are incentivized to create flashy features that they can patent, and that’s what their resources are dedicated to. Since every company is racing to be first to market (thus increasing the value of their capital), they have an incentive to rush the product. So not only do they have buggier software to begin with, it costs more and is more complicated because of marginally useful features that were put in solely for marketing purposes. Once a company gets established, it can ensure that anything built for it is incompatible with anything else (see the tablet/phone market) so that you have to keep paying them. With IoT, it’s about locking you into a proprietary service.

Ross Snider February 1, 2017 4:08 PM

Ideas for low-cost government involvement that doesn’t lead to de facto surveillance and propaganda authorities:

  1. A software security labeling system (similar to FDA food labeling) to increase the level of market-consumable information available.

  2. A repository of publicly funded code snippets that achieve strong security properties.

  3. A project to create a next-generation infrastructure, from silicon (Harvard vs. von Neumann vs. Miller architecture) to routing protocols to authentication protocols.

  4. A tax on insecure products.

  5. A customer advocacy pipeline to bring suit against companies that fraudulently overstate their security.

  6. Security research grant/funding for academia.

  7. Provisions in laws that make security research and disclosure safer and less liable.

  8. Public support for free(dom) software principles, including the ability of people to investigate the software that behaves on their behalf.

Anura February 1, 2017 4:10 PM

@Anura

“The problem is that the thought experiment that capitalism was born out of fits a goods-producing industry, not services and not reproducible works.”

Okay, that’s wrong; I don’t know why I included services. Anything that’s straight labor fits the capitalism model just fine. It’s only reproducible works that don’t really fit.

r February 1, 2017 4:22 PM

@Sancho_P

You’re right, a DVR doesn’t have that much power. But fire-and-forget technology cannot be fired (up, like a fire – not “fired” like Trump) or forgotten without the risk of “to seek out and find new life and new civilizations”.

Insecure homogeneous networks and/or devices are no different than Melissa or ILoveYou.

Are we ready for 2.0?

You and I might be defending against a spam flood or an instance of a wiper on “our” systems when we’re slammed with a DDoS. The more unregulated, unmonitored devices we put out there, the larger the problem of policing them becomes.
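The arithmetic behind that policing problem is simple and grim. A back-of-envelope sketch (the device count and per-device bandwidth are illustrative assumptions, roughly in the range reported for the Dyn attack, not measured figures):

```python
# Back-of-envelope: many individually feeble devices add up
# to a crippling aggregate flood.
def aggregate_gbps(devices: int, mbps_per_device: float) -> float:
    """Total attack traffic in Gbit/s from a botnet of small devices."""
    return devices * mbps_per_device / 1000.0

# e.g. 600,000 compromised DVRs/webcams, 2 Mbit/s upstream each:
print(aggregate_gbps(600_000, 2.0), "Gbit/s")  # 1200.0 Gbit/s
```

No single DVR matters; the multiplication does.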

Mike February 1, 2017 4:38 PM

All but the most rabid of the small-government or anti-government people are pro both defense and crime prevention. That may be the key to getting them to buy into government-mandated security features. But it will require people like Bruce to continue calling BS when the government (or some large corporation) tries to sell us out using that as an excuse.

r February 1, 2017 4:57 PM

cyberspace, the final frontier.

these are the voyages of a self-replicating disaster.

its continuing mission: to explore unfamiliar devices…
to seek out new niches;
new command and control platforms…

to boldly go where no mod has gone before!

adampm February 1, 2017 5:36 PM

Brilliant! Let’s put the same people who have already shown time and again that they can’t get it right in charge so they can’t get it right again.

The best case is we’ll end up with more security theater, and the worst case will be the resulting regulations costing so much that nobody will be able to afford to be online. (This isn’t even considering the total death of privacy.)

How many people are actually plugging these devices in? I have yet to see an IoT device that toggles the “I have to have it” switch.

r February 1, 2017 5:47 PM

@adampm,

You obviously haven’t met OnStar, Android or ‘smart’ TVs.

Can’t wait for an upgrade?

Check your local walmart.

Dirk Praet February 1, 2017 6:59 PM

We need an entire ecosystem that supports people bridging the gap between technology and law.

Precious few people I know do both. The worst part about it is not having to keep up with two completely different and ever-changing domains; it’s the politics surrounding them. Having done a fair part of implementing (and enforcing) regulation, policies and procedures myself, it generally is a friggin’ nightmare without the full support of the board AND a major business case like mandatory ISO/IEC 27001 certification et al. On a societal level, that translates to: who really cares, and what just totally blew up? With regards to the IoT, the current answers are “no one” and “nothing”, making it reasonably hard to create any awareness of the problem unless you can somehow associate it with terrorism.

In my experience, there is only one way to get any type of regulation by whatever body (private or public) right: a clear sense of direction in what you want to achieve, accompanied by a strong system of checks and balances with harsh liability and accountability applicable to both the regulator and the regulated. Anything else is doomed to fail. But I guess I’m stating the obvious.

buckaroo February 1, 2017 7:35 PM

“Most software is poorly written and insecure”

Because there is no punishment. Fine their companies, say, 10% of revenue and have the SEC implement clawbacks for CEO compensation, and things will change. If someone dies or is seriously hurt due to incompetence, send the CEO to prison.

“the Target Corporation was hacked by someone stealing credentials from its HVAC contractor”

Fazio Mechanical Services had zero security. Zero.

As for cars, we are heading toward a perfect storm. Soon we will have self-driving vehicles just about everywhere, but instead of only killing passengers, we’ll kill pedestrians. The Kalanicks, Musks, Cooks, and other parasites of the world will find a way to deny liability, but lawyers will make billions trying to pin the tail on the negligent donkey. Every day will be like New Year’s Eve, but instead of drunken drivers, we’ll have hacked vehicles and defective software.

A Cassandra? You’re an optimist.

Nice op-ed.

BH Guy February 1, 2017 7:52 PM

A regulatory agency would bring more problems than solutions. Recognize data as a “toxic asset”, as you suggested in an earlier post. Give that recognition legal effect and make companies start accounting for security issues in their product development. Existing institutions (legal, insurance, accreditation) and upcoming ones (e.g. 501(c)(3)s for technologists in the public interest, likely in cooperation with the EFF and other relevant orgs) would quickly develop.

My Info February 1, 2017 8:07 PM

@Bruce Schneier

As computers continue to permeate our homes, cars, businesses, these market failures will no longer be tolerable. Our only solution will be regulation, and that regulation will be foisted on us by a government desperate to “do something” in the face of disaster.
In this article I want to outline the problems, both technical and political, and point to some regulatory solutions.

We also need to reverse the trend to connect everything to the internet. And if we risk harm and even death, we need to think twice about what we connect and what we deliberately leave uncomputerized.

If we get this wrong, the computer industry will look like the pharmaceutical industry,

There are the sensors that collect data about us and our environment:

#1. Computers are already everywhere and in everything.
#2. They have microphones, video cameras, GPS, and other sensors.
#3. They are already connected to the Internet.
#4. Government has neither the strength nor the desire to overcome the lust of all the peeping toms that want to violate the privacy of our homes, vehicles, businesses, public spaces, and even remote mountains, hills, forests, and nature preserves.
#5. All regulators want is to require a back door in everything, as if there aren’t already 1001 back doors in every device on the market. And they are already slobbering at the prospect of requiring all homes to be fully IoT-connected with everything on the Internet.

I’m sorry, Bruce. That boat left harbor a long time ago. This is 2017. We have already advanced far beyond 1984 in politics and technology.

Your approach is like a Band-Aid on a mortally gangrenous wound, particularly as regards the irreversible trend of connecting everything to the internet. The internet is no longer ours. It will require total war in meatspace to find, defend, and maintain a place to live and work with even the merest shred of decency or modicum of privacy from the cyber-war zone and human-trafficking red-light district which the internet has become.

Wael February 1, 2017 10:30 PM

Idiots with miotic-afflicted vision designed IoT without an iota of security. It’s an after market artifact. Think that’s bad? Wait until the symbiotic brain-machine interface is hooked to the ‘net. Then Patriotic regulations will be self generated, will come from within, or “regulations included”. What a riot!

Figureitout February 1, 2017 11:10 PM

Firstly, I believe we should just let the supposed hackfest run its course; if it becomes unbearable or temporarily shuts down our society, we’ll actually start to take computer security more seriously and the market will shift. The fact that you can have successes here or there attacking something is totally acceptable in a civil society. It’s not very threatening either, since it doesn’t take much skill to do and everyone is vulnerable to the same threat.

I second Ross Snider’s idea of having a label for security created by a non-profit or a TEMPORARY gov’t agency. Clearly defined objectives and deliverables need to be presented before even thinking of creating one. If it doesn’t meet those goals within the time period (say, 6 months to a year), then shut the agency down and try again.

I’m wary of taxing “insecure” products (this needs to be again, clearly defined), more like giving tax breaks to organizations that demonstrate security measures being taken in all their processes. It’s easier to define that, also not revealing IP as well.

Oh how about when gov’t agencies themselves are found to be making companies more insecure and attacking them, that there’s some legal recourse for the people who authorized and carried out those orders? Like jail time?

When I got my first degree, in public affairs, I was all about serving the public and making gov’t work smarter and actually for the people. When I found out the system was mostly corrupted beyond repair and I couldn’t help citizens more, I got into engineering, and it was one of the best decisions of my life. This sounds interesting if it shows any promise and doesn’t get shut down by the same shadowy scumbag forces trying to keep their worthless jobs. Maybe cut off some funding from those agencies as well, and fund actual computer security for the citizens?

Clive Robinson February 2, 2017 1:21 AM

@ David Collier-Brown,

This is an example where having a policeman in the marketplace is a good thing.

It could be but…

It introduces a hierarchy of power which becomes a pyramid, with more and more power in fewer and fewer hands as you rise towards the top.

History has taught us that such structures are easily suborned by those intent on their own view of how things should be. At best it becomes dangerously paternal, at worst totally corrupt and used to destroy others.

Which is why we have the seemingly eternal question of “Who watches the watchers?”.

This is always a problem with governance and the respective forms of “guard labour”. In effect the guard labour are “captured”, as their livelihood is dependent on those above them; thus they learn to “do as ordered”, usually without question or even thought. However, some revel in such authoritarian following and show a willingness to be led not just by direct orders but by mere inference or nuance from someone higher in the hierarchy. This results in their promotion and the effective demotion of those more thoughtfully honest. The result is that the hierarchy becomes rapidly corrupted from the top down.

But power alone is pointless; power has to be used, to influence and be in turn influenced by other sources of power. But power is like bread: man cannot live by it alone, so power also seeks the trappings of power, that is, status.

Thus you have to find the impossible to be a watcher of watchers: the incorruptible “man” with absolute power… History teaches us that such people cannot exist. Hence we say “Power corrupts, and absolute power corrupts absolutely.” To expect otherwise is to look for gods in mankind.

Wael February 2, 2017 1:23 AM

@Andrew,

I’ve been following this research for some time. Very interesting and has huge implications.

LibertarianProgrammer February 2, 2017 2:39 AM

If we get this wrong, the computer industry will look like the pharmaceutical industry, or the aircraft industry. But if we get this right, we can maintain the innovative environment of the internet that has given us so much.

You do not give an example of an industry which is regulated the way you want, and producing the policy outcomes you want. I question that any exist.

The defender has to secure the entire attack surface. The attacker just has to find one vulnerability ­- one unsecured avenue for attack -­ and gets to choose how and when to attack. It’s simply not a fair battle.

The vulnerability which lets your particular mind be controlled by attackers is “security is the exception to our small-government bias”. Other passwords include “for the children”, “intoxicants”, “sex for pleasure”, and “god’s will”. The organized criminals just read the list of passwords over the media, and thereby break into nearly everyone.

[Markets] can’t solve collective-action problems.

Neither can governments, which create the largest of all possible commons, the legislature. Government is the largest possible single point of failure.

Our choice isn’t between government involvement and no government involvement. Our choice is between smarter government involvement and stupider government involvement.

Of course ‘no government involvement’ is a choice. Stop paying taxes and government will lay off its employees.

You want the Federal Software Administration.

Clive Robinson February 2, 2017 2:40 AM

@ K.S.,

To me, this is strange. We created EPA because we didn’t like rivers catching fire. Why are we now willing to tolerate Internet catching fire?

For the very same reason of,

… we decided that faulty cheap software is desirable,

It’s because “We cannot see what is hidden from us”. All the average user of software sees is the user interface and a small part of the functionality.

If we could see the software as a physical object, we would be repulsed by its ugliness: every patch, kludge and change, like a festering sore, producing a monster of our worst nightmares.

But even developers either cannot visualise or do not attempt to visualise their Frankenstein works.

Thus “out of sight, out of mind” applies to all but those who seek to find “the true nature of the beast” for whatever reason.

tyr February 2, 2017 3:01 AM

I seem to recall the power grid exposing its
vulnerability (caused by random interconnectivity)
with a massive failure that required outside help
to get a restart. Systems should not be designed
by addition of random elements without a thought
for the consequences. My own experience is that
few people can step back far enough to see the
system they are adding things to.

Bruce is right to call out the level of this problem.
I’m not sure what a workable solution is but just
hoping it will go away isn’t the way to get between
it and a massive failure event.

Software is easy to fix.
Never let a programmer do the alpha test of his code.
Never let anyone with a vested interest be involved
in a beta test of the code.
Fix the problems you find in testing before it leaves
the door onto an unsuspecting world.

I’ve been reading Karl Polanyi’s ‘The Great Transformation’;
if his thesis is correct, all of the theories about how
markets work are horribly bogus. He wasn’t around to see
the labor leg of the tripod start to disappear from the
effects of computerization. What we might be seeing is
another transformation just as disruptive to human life
as the last one. If that is true the governments will
have to intervene in an attempt to salvage something of
value out of the mess we’re building.

One danger in setting up regulatory bodies is how the
folk who draw up the mandate get selected. Some of the
big players do not seem to be our friends or friends
of ordinary people.

Clive Robinson February 2, 2017 3:02 AM

@ Trust Me,

The best solution is to quit connecting things to the Internet by default. Maybe you don’t need to view the contents of your refrigerator or change your thermostat remotely. I get by just fine without that ability.

Thus you are seen as an impediment to that cloaca of data that they see as their right to collect and use to their benefit, not yours. So, as we are now seeing, products will fail to work either at purchase or shortly thereafter (pre-installed Windows 10 systems are known to do this).

And you can bet there is a “health and safety” or “for the children” argument already in place. You even hint at one with,

… not to mention the enormous potential for the disaster of automatic device patching. Consumers can’t be trusted to maintain their Internet of Targets on their own…

Thus forcing them to connect is “For the Greater Good”[1], and if you argue otherwise you are de facto “an enemy of the people”, thus of the “State”, which makes you a “Treasonous Terrorist” or worse, thus a subject for “being disappeared”, “suicided” or, if really unlucky, “re-education, public admission of sins and an early demise from a horrendous disease or similar”.

[1] @Wael and I had started a conversation about this in this blog’s pages, but we really did not finish it.

Clive Robinson February 2, 2017 3:18 AM

@ r,

Rent seeking capitalism isn’t the problem I don’t think.

How many negatives did you intend?

Dirk Praet February 2, 2017 4:42 AM

@ Figureitout

I’m wary of taxing “insecure” products (this needs to be again, clearly defined), more like giving tax breaks to organizations that demonstrate security measures being taken in all their processes.

I concur. Monetizing the problem is the standard Belgian approach to practically every issue. It doesn’t solve anything, it just exempts the rich and places additional burdens on everyone else while in practice nothing changes. As from yesterday, a rather controversial decision of the city council to make Antwerp into a low emission zone came into effect. It bans older cars from the center of town, unless their owners cough up a steep amount of dinero. It primarily affects (older) citizens and small businesses that can’t afford new vehicles and now have to fall back on dysfunctional public transport all while their more fortunate counterparts just pay up and carry on. At the same time, the administration goes ahead with an even more controversial plan to extend the capacity of the existing ring road through the center’s suburbs, which will exponentially increase emissions, especially by lorries.

Oh how about when gov’t agencies themselves are found to be making companies more insecure and attacking them

The only way to avoid this is by holding both the regulated and the regulator fully liable and accountable for whatever is implemented. With which I indeed mean ROI in the sense of “risk of incarceration” instead of deferring to the taxpayer.

@ buckaroo

If someone dies or is seriously hurt due to incompetence, send the CEO to prison.

That’s what I’m talking about. And add “lack of due diligence”, which generally is easier to prove. I refer to existing legislation like for example the Sarbanes-Oxley Act, which also has a number of IT provisions. But do not expect it to happen under the current US administration that for all practical purposes is hell-bent on eroding, if not eviscerating most currently existing regulation (Dodd-Frank et al). The biggest problem in this context is the persistent myth that free markets somehow miraculously regulate themselves and, although debunked over and over again, at the political level remains a dogma that is extremely difficult to overcome.

@ tyr

Never let a programmer do the alpha test of his code.
Never let anyone with a vested interest be involved
in a beta test of the code.
Fix the problems you find in testing before it leaves
the door onto an unsuspecting world.

There are plenty of existing methodologies and frameworks out there that do exactly that. What you need is a proper incentive for companies to adopt them, as well as an affordable, internationally accepted QC label, the audit for which, as @Figureitout already suggested, should in some way be either tax-deductible or provide other benefits, so as not to favour only the big players.

Andrew G February 2, 2017 8:07 AM

Great article, Bruce. I think it’s an important insight, that balancing the needs of many stakeholders is exactly why democracy was created in the first place. I disagree with the cynics who say government can’t do anything right. I certainly don’t trust corporations or private citizens to protect my privacy, my safety, and my economic interest. Government regulation is the lesser evil when only bad options remain.

I am not sure a Department of Technology Policy is a tenable idea. As you said yourself, technology is pervasive. It impinges on energy, transportation, medicine, finance, consumer products, defense, labor, housing, law enforcement, education — every sector of the economy, every department of the executive branch. How would a Department of Technology Policy not become a “bureau of everything?” It’s too broad a domain for anyone to competently understand and sensibly regulate, to say nothing of the well-justified fear that an unaccountable and self-serving bureaucracy would become a worse problem than the one it was meant to solve.

This suggestion is not nearly as well thought out as your position, so it may be flawed. Perhaps the alternative is to change our mindset about who is qualified to govern and to regulate. Someone who doesn’t understand the implications, risks, and opportunities technology provides is not competent to govern, legislate, or regulate our modern society. We need to elect and appoint people who are technologically literate. It’s a scary thought that someone who was a perfectly capable Senator even 10 years ago is now dangerously incompetent to implement public policy. But that’s where I think we are. Our government officials must be educated or replaced.

buckaroo February 2, 2017 9:23 AM

@Dirk Praet

“But do not expect it to happen under the current US administration”

I don’t disagree that Trump will eschew regulation, but how would that be different than Obama who did not prosecute a single bankster involved in the 2008 crash? (Bernie Madoff doesn’t count) As for Dodd-Frank, all Obama and Congress had to do was bring back Glass-Steagall and neuter derivatives gambling which was enabled via the Commodity Futures Modernization Act — but they didn’t. With respect to Wall Street and corporate crime, there is no difference between Democrats and Republicans.

“the persistent myth that free markets somehow miraculously regulate themselves”

I completely agree with your sentiment, but do remember that Clinton, a Democrat, was more responsible than Bush II for the 2008 crash. And then both Bush and Obama employed Goldman Sachs employees in their administration, with Obama even bringing in some of Clinton’s carpetbaggers, notably Tim Geithner. And Alan Greenspan, the Ayn Rand devotee who embraced the fairy tale you referenced, served four presidents.

Dirk Praet February 2, 2017 10:43 AM

@ buckaroo

I don’t disagree that Trump will eschew regulation, but how would that be different than Obama who did not prosecute a single bankster involved in the 2008 crash?

There is no doubt in my mind that both Obama and Shillary were assimilated by Wall Street too. But it’s obvious that it’s going to be even worse under the current administration. I refer to past statements on DFA and the recent promises to big pharma and other industry leaders like the Dell handshake we saw on TV a while ago.

I completely agree with your sentiment, but do remember that Clinton, a Democrat, was more responsible than Bush II for the 2008 crash.

No argument either. All presidents and Congresses since the Reagan era were responsible for the total deregulation and embedding of Goldman Sachs banksters that led to it. It is however kinda ironic that the man who rose to power by kicking against a political and financial establishment that, to the frustration of many, was perceived as unanswerable and unaccountable is going to take the exact same practices to even higher levels. If my information is correct, he already has at least two former Goldman Sachs folks in his administration too.

Wael February 2, 2017 9:30 PM

@r,

That’s a whole heck of a lot of iotas.

And that’s just the beginning! Give it five years.

Clive Robinson February 2, 2017 10:07 PM

@ Andrew G,

Someone who doesn’t understand the implications, risks, and opportunities technology provides is not competent to govern, legislate, or regulate our modern society.

That is a fallacy…

By simple thought, nobody is “competent”, or even expert, in a sufficiently new field of endeavour, by definition.

But consider project management and other forms of management: you do not need to be “competent” in the field of endeavour to run a project or a team, as long as you have one or more people who are what passes for expert in that field or one closely related to it or underpinning it.

Because even on the technical side you don’t actually have, or need, somebody competent in a new field of endeavour (by definition); they just need to be competent and sufficiently cautious in the underpinning technologies and the methods used. In essence this is what practical science and engineering is all about.

So we don’t actually need

to elect and appoint people who are technologically literate.

At the governance, legislative or regulatory levels. They just need to be competent in their job of governance, drafting legislation, and regulation. Importantly, they must have the ability to understand and evaluate what experts in the domain or its underpinnings are telling them (think about judges and complex cases of fraud or technical liability).

It’s a scary thought that someone who was a perfectly capable Senator even 10 years ago is now dangerously incompetent to implement public policy.

I would argue that what is scary is not their capabilities to govern, legislate or regulate but their ability to understand and evaluate what passes for experts in the domain.

Now it does not matter how good you are at understanding and evaluating if what you are being given/told by the supposed experts is in effect random and mostly conflicting information.

The problem with our field of endeavour is it really lacks scientific underpinnings, thus the whole evaluation falls on which “expert” is most credible at the time an evaluation is made.

If the majority of the “experts” are military then you will have bias in that direction. If they are company lobbyists then you will have bias in that direction. Likewise if you listen to those who demonize what is agnostic then you will have bias in that direction.

The only way man has of removing bias is by careful and unbiased analysis based on what should be evidence in the scientific sense.

The one thing the CompSec field of endeavour lacks, by a very very long way, is properly thought out methods of measurement that can be agreed for testing and evaluation, whereby any individual can test and come to the same conclusions as anybody else.

Thus if you want to solve the problem of what you think is poor governance, legislation or regulation, you need to ensure there is little room for bias in what the supposed domain experts provide to those who do the governance, legislation or regulation. After all, they can only work with what they are given, and the GIGO principle holds as much in governance, legislation and regulation as it does in technical endeavours.

If we as practitioners allow garbage science or faux opinion to become dominant in the input, then we must accept what comes out as a result at the output. We have only ourselves to blame if we have not developed the techniques etc. to show up the charlatans for what they are, using sound evidence and proof.

If however when we have the techniques and they are accepted and those in governance, legislation or regulation go against the evidence, then and only then can we claim they are incompetent at evaluation and understanding, or of showing bias.

I really do not want CompSec becoming the new “climate change”, but that is almost certainly where it’s going to go if we don’t develop the ability to oust the charlatans from the input process. No amount of name calling or accusations of incompetence, bias or corruption will work; to do so without provable evidence is just playing into the charlatans’ hands.

Figureitout February 2, 2017 10:14 PM

Dirk Praet
It bans older cars from the center of town
–Yeah, that’s tough. I do hate getting behind cars from the 80’s and inhaling the fumes. It’s mostly diesel in the EU too, IIRC. Gas was too expensive. I’m happy w/ my supposedly low-emission vehicle. It’s been a while; it was definitely uncomfortable riding a bike on cobblestone, and they get real slick when it rains (never rains in Belgium, eh? :p). Trains I thought were good, but I never rode them for daily transport. Rode my bike to school sometimes (took bus) and for soccer practice every other day.

On some of the roads, I’m surprised a truck would be driving thru it, not a lot of room for error.

The only way to avoid this
–We need good ways to capture and secure evidence of their crimes too.

RE: labels
–What I like about that is that it gives companies a chance to market security, and I want an excuse to work on security features. :p The growing security market would most likely purchase memory sticks, screens, desktops and laptops if they had these stickers and the standards weren’t a joke (of course they’d need a unique key on the stickers that you 2FA via a very locked down area for generating these keys :p).

Clive Robinson February 2, 2017 11:44 PM

@ Figureitout,

It’s mostly diesel in EU too, IIRC. Gas was too expensive.

The reason for diesel is twofold. Firstly it has desirable characteristics in the way the energy is extracted from the fuel, which amongst other things gives a better miles-per-gallon figure (even when software fiddles are removed). Also, diesel does not require “anti-knock” / “pinking” agents like tetraethyllead (TEL), unlike petrol. TEL being a very hazardous substance, it’s undesirable having it in the environment.

Secondly, certain tax advantages were given to diesel over petrol (gas) due to getting carbon emissions down to meet climate change targets.

However you don’t get something for nothing, and technology moves on. Modern internal combustion engines no longer need TEL, and cat-converters can remove many of the nasties that result from burning petrol (gas). However you can not use cat-converters with diesels due to the “micro particulates”, which we also now know are as hazardous to human health as TEL was…

Thus all those people who purchased cars using supposedly more environmentally friendly diesel are now finding they are the new pariahs, getting hit with new taxes and anti-pollution regulations…

I’m guessing that other technology considerations will, within twenty years, make further changes to which hydrocarbon is used to power motor vehicles. One thing that is certain is that rechargeable electrical battery storage of energy will not be a major contender. This is because lithium-ion batteries are not going to improve in power-to-mass ratios very much, and lithium is quite a finite resource. Likewise other electrical charge storage devices will not get much better than lithium. Comparatively, hydrocarbons are easier to store and have much, much higher energy densities than currently practical rechargeable charge storage devices.
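
The energy-density point can be roughed out with a back-of-the-envelope calculation. The figures below are commonly cited approximations, not numbers from the comment itself: petrol at roughly 46 MJ/kg of specific energy, a good current lithium-ion cell at roughly 250 Wh/kg.

```python
# Back-of-the-envelope specific-energy comparison.
# Both figures are rough, commonly cited approximations.
PETROL_MJ_PER_KG = 46.0            # specific energy of petrol
LI_ION_WH_PER_KG = 250.0           # a good current Li-ion cell

li_ion_mj_per_kg = LI_ION_WH_PER_KG * 3600 / 1e6   # Wh/kg -> MJ/kg
ratio = PETROL_MJ_PER_KG / li_ion_mj_per_kg

print(f"Li-ion: {li_ion_mj_per_kg:.2f} MJ/kg")               # 0.90 MJ/kg
print(f"Petrol stores roughly {ratio:.0f}x more energy per kg")  # ~51x
```

Even after crediting electric drivetrains with roughly three times the tank-to-wheels efficiency of internal combustion, a large raw gap in stored energy per kilogram remains.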

Ideally what we want is an easy way to generate and store hydrogen gas, and low-mass fuel cells driving high-efficiency electric motors. Unfortunately the current technology is a long, long way off replacing hydrocarbon power trains, be they mechanical or electro-mechanical…

Winter February 3, 2017 4:12 AM

@Clive
“One thing that is certain is that rechargable electrical battery storage of energy will not be a major contender.”

I am not that pessimistic. There is a lot of work going on in battery research to progress beyond Li-ion.
http://www.nature.com/news/the-rechargeable-revolution-a-better-battery-1.14815

A different solution would be to make flow-batteries and replace the used fluids at a “gaz” station.
https://en.wikipedia.org/wiki/Flow_battery

People are actually drooling over that solution for many reasons.
http://www.mnn.com/green-tech/transportation/blogs/too-good-to-be-true-fast-electric-flow-battery-cars

The incentives are big, very big. And so are the obstacles.

Drone February 3, 2017 5:47 AM

“I have a proposal: a new government regulatory agency. Before dismissing it out of hand, please hear me out.”

I read every word… and still dismissed the pro-Government argument.

Mark Alexander February 3, 2017 5:49 AM

I agree with Bruce Schneier: we can’t just leave this to the market. This is one example where national and regional security overrides the free market. We need a citizen’s campaign to get lawmakers to mandate the kind of measures we need to control the DDoS threat (like BCP 38). Here in Europe that would best be done at the European level.
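
For reference, BCP 38 (RFC 2827, “network ingress filtering”) boils down to dropping traffic whose source address could not legitimately have arrived on the interface it came in on, which removes most of the source-address spoofing used in DDoS reflection attacks. As a minimal sketch of the same idea on a Linux router (this assumes symmetric routing and must be run as root; it is one illustrative knob, not the whole of BCP 38):

```shell
# Enable strict reverse-path filtering (RFC 3704 / BCP 38-style)
# on all interfaces. Packets whose source address is not routable
# back out the arrival interface are dropped, blocking most
# source-address spoofing.
sysctl -w net.ipv4.conf.all.rp_filter=1
sysctl -w net.ipv4.conf.default.rp_filter=1

# Make the setting persistent across reboots:
printf 'net.ipv4.conf.all.rp_filter = 1\nnet.ipv4.conf.default.rp_filter = 1\n' \
  > /etc/sysctl.d/50-rpfilter.conf
```

ISPs with asymmetric routing typically use the loose mode (`rp_filter=2`) or explicit ACLs at the edge instead.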

Clive Robinson February 3, 2017 6:14 AM

@ Winter,

The incentives are big, very big. And so are the obstacles.

Yes and the problem with mass is one that you can not get over with any solution that might be currently practical.

The problems with batteries are the “dead weight” that is always present irrespective of battery type, the “pump problem” of charging, the complex manufacturing technology, short life span, and those chemicals.

As for “flow batteries”, they are of interest in part to get over a larger part of the lifetime issue and “the pump problem”, where it takes a couple of minutes to put another half thousand miles in the tank -v- the eight hours to charge a couple of hundred miles at best.

However at least one electric car manufacturer has looked at automated battery changing to get around the pump problem, but found that there were thermal issues that were more significant than the expected mechanical safety issues.

But have a look at the chemicals involved with all batteries, not just flow batteries, and ask yourself if you’d want them sloshing around on you at an accident site. Some are a real poisoner’s delight or a fire risk, thus considerable extra mass is needed.

The only commercial use of flow batteries I currently know of is emergency backup for the grid etc., where high reliability, long storage time and fast startup are considered critical. The constraints are such that gravity generation using railway wagons on long down slopes is a serious contender against flow batteries, especially when safety is factored in. Even high-pressure pneumatic storage is currently rated as more interesting commercially.

But at the end of the day, ion-based charge storage versus chemical energy storage is a non-starter for a whole host of reasons, one of which is a bit of a killer: the elements required are insufficiently plentiful to make the number of personal vehicles we currently have…

Dirk Praet February 3, 2017 7:15 AM

@ Drone

I read every word… and still dismissed the pro-Government argument.

Do feel free to suggest an alternative solution of your own other than preserving the current situation. That is of course unless you don’t see the problem.

Winter February 3, 2017 8:45 AM

@Clive
“Some are a real poisoners delight, or fire risk thus considerable extra mass is needed. ”

All true, but gasoline at a crash site has been known to be problematic too.

The problem with hydrogen gas, the “preferred” alternative, is that current storage involves adsorbing it onto metal pellets. The dead weight problem. That comes on top of the inefficiencies in producing H2:
https://phys.org/news/2006-12-hydrogen-economy-doesnt.html

What sets apart batteries from combustible fluids is that they can be fed directly from solar and wind energy. If there was an efficient way to convert solar energy or electricity into methanol without using agricultural land ;-), that would beat everything else. Alas, that seems to be rather far off (and it requires H2 gas).
https://phys.org/news/2016-01-carbon-dioxide-captured-air-methanol.html

If I would have to place a bet, I would take flow batteries of some sort.

Clive Robinson February 3, 2017 11:37 AM

@ Winter,

Alas, that seems to be rather far off (and it requires H2 gas).

Actually, we appear to have forgotten how to do it…

During WWII the Germans used a process derived from “gasification” to make not just “town gas” (CO) but hydrogen (H2) as well. I’ve done this myself when playing with biomass conversion. If you adjust the process in the correct way you can get methanol and other hydrocarbons out relatively efficiently.

I was not interested in getting anything other than the town gas and hydrogen off to run a generator, whilst using the gasifier to produce heat for a hot water system to heat a largish workshop and also dry other biomass.

When you burn wood in a typical wood stove, most people do not realise that something like two thirds of the available energy goes up the chimney, much of it in the form of combustible CO or hydrocarbons and heat. The trick is to have only moderately warm CO2 as the final output (which was used experimentally to force tomato plants grown under plastic).

I’ve mentioned some of the details before but the result was interesting and the gasification unit was used quite effectively at a raw lumber yard to heat both the office and workshop and other drying facilities with excess electricity sent back to the grid along with that from photovoltaic and wind generation.

I’m currently looking at a new project a friend is building (as it’s too big for a hobby interest) to not just clean farm “waste water” but also produce fast-growing biomass that can be used to produce methane or alcohol as a fuel from an appropriate bio-digester, to provide power, with the normal waste product actually becoming fertilizer to go back on the farm land. The point is that such “on site” systems can be self-sustaining with minimal maintenance, which significantly reduces cost and potential pollution, and in some cases provides an income, not a significant outgoing.

Oh if you ever wondered how you could get food from cattle waste, have a look at growing carp by feeding it to them…

Wael February 3, 2017 12:13 PM

@Clive Robinson, @Winter,

have a look at growing carp by feeding it to them…

In scientific terms:

F(Crap) = Carp

F: The fish eating function, Crap is the input

Makes total sense!

Anura February 3, 2017 12:18 PM

F: The fish eating function, Crap is the input

That isn’t how I was told the digestive system works.

Wael February 3, 2017 12:23 PM

@Anura,

That isn’t how I was told the digestive system works.

The devil is in the Friggin’ details 😉

Nile February 3, 2017 1:48 PM

That’s an interesting conceptualisation of the internet of things: sensors, computers, and actuators.

Trouble is, even a simple sensor is a computer attached to a sensor.

It gets worse: any sensor that can be connected to the Internet has several Turing machines in it that we don’t think of as ‘computers’. Like the controller chip in an SD card, which can run a small realtime operating system that makes it capable of general computing.

May the Flying Spaghetti Monster help us all if there’s a way of backdooring them – or hard disk controllers – or any of the chips (plural) in the GSM telephony stack on your ‘phone.

The CPU in your laptop – or the GPU in any device that we recognize as being programmable – is just another ‘thing’ that dozens of tiny computers are attached to.

Not all of them are connected-up and capable of running a DDOS attack, or even ‘phoning home; but some of them are.

In short, the attack surface is a fractal that gets bigger as we look at it more closely.

Sam February 3, 2017 8:05 PM

It seems to me Schneier jumps far too hastily from “this is a problem” to “we need regulation”. What about tortious liability for vendors/owners/operators of insecure systems that are used to inflict harm? It seems like the only fair approach to me, and would likely be cheaper in the long run for everyone.

Wael February 3, 2017 8:13 PM

I can’t wait for the day when IoT says:

We (IoT and security) was like peas and carrots again 🙂

Clive Robinson February 3, 2017 11:24 PM

@ Anura,

You are not thinking of the digestive process correctly… Hardly a surprise; it’s not a subject most of us care to think about unless we are very scat-minded.

You may not know that rabbits have hard and soft pellets (poop) and that the rabbits eat the soft pellets they produce (yup, they eat their own shit ;-) The reason for this is that, compared to larger herbivores, rabbits have a very short digestive tract and thus do not come with the four stomachs that digesting grass etc. appears to require… so they use their digestive tract twice :-S

But if you analyze large herbivore poop, it generally is at best only partly digested. So the fish are kind of doing to the cow crap what a rabbit does to its soft crap on the second time around…

@ Wael,

Yes I realise this looks like I have an interest in “scat”, but please “Don’t rub it in”…

Otherwise I will tell you why in certain African tribes unmarried males walk around for a week with a lump of crap tied under their arm pit…

Clive Robinson February 3, 2017 11:36 PM

@ Dirk Praet,

Your “now now Keekyana” comment earlier passed me by when I read it. But the penny dropped, as it were… Thus you might like,

https://www.theguardian.com/books/2017/jan/24/george-orwell-1984-sales-surge-kellyanne-conway-alternative-facts

And,

http://thehill.com/blogs/blog-briefing-room/news/315795-kellyanne-conway-ive-gotten-mail-with-white-substances-in-it

Note nobody says what the “white powder in the envelope” was…

I wonder if it was a little “Colombian Marching Powder”; I can see why she might get sent some in the post…

Clive Robinson February 3, 2017 11:48 PM

@ Nile,

May the Flying Spaghetti Monster help us all if there’s a way of backdooring them – or hard disk controllers

It’s too late; there are a couple of Russian web sites that give details of how to do it…

Clive Robinson February 3, 2017 11:58 PM

@ Wael,

I’m all ears 🙂

Terry Pratchett had a little joke about that in one of his Discworld books. A young Igor had a rabbit covered in ears and said he called his pet “Eerie”.

Anyway, back to the lump…

When an unmarried male wants to marry a girl, he speaks to his father, who speaks to his wife, who in turn goes to speak to the girl’s mother. A lump of the girl’s poo is given back up the chain and the father ties it into his son’s armpit for a week or so. The idea being that if the son can survive a week of the smell of the girl’s poo that close, then the chances are he’s in Love not Lust and the marriage will last…

Anura February 4, 2017 12:10 AM

@Clive Robinson

You may not know that rabbits have hard and soft pellets (poop) and that the rabbits eat the soft pellets they produce (yup, they eat their own shit ;-) The reason for this is that, compared to larger herbivores, rabbits have a very short digestive tract and thus do not come with the four stomachs that digesting grass etc. appears to require… so they use their digestive tract twice :-S

You mammals are so weird.

Wael February 4, 2017 12:11 AM

@Clive Robinson,

given back up the chain…

Fascinating. But I won’t take that sh*t for the simple reason that the chain has several weaknesses! What if the mother or father don’t like the future bride? Wouldn’t it be possible they give the son some bool sheet instead? 🙂

Clive Robinson February 4, 2017 12:33 AM

@ Wael,

Wouldn’t it be possible they give the son some bool sheet instead? 🙂

Only if they liked the girl…

I can guess that you have not had to shovel poop in a zoo when you were younger…

As a rule of thumb, herbivore poop does not smell very unpleasant, whilst carnivore poop is quite unpleasant. However, for reasons I’m not sure of but could take a good guess at, omnivore poop is often WMD grade in the smell stakes…

Jen Gold Stockholm February 4, 2017 12:48 AM

@ Wael @ Clive

Fascinating. But I won’t take that sh*t for the simple reason that the chain has several weaknesses!

It’s a web of trust. And makes for a high assurance marriage. Just don’t ask what they get up to at the wedding!

Incidentally, rabbits recycle their poo for extra nutrition in such a way it’s not even observable

Wael February 4, 2017 1:16 AM

@Jen Gold Stockholm, @Clive Robinson,

It’s a web of trust.

Almost 15 years ago, I saw a project demo at one of my previous employers. I can’t say much about it because, believe it or not, it hasn’t been productized yet; it’s not “publicly available knowledge”. I remember that what delayed it were security issues. I would call the project “IoT”-related. Security was a big thing there, and we had the power to stop projects that didn’t have the right security level. Fast forward this many years, and…

Clive Robinson February 4, 2017 7:47 AM

@ Anura,

You mammals are so weird.

Hmm, now there’s a statement… I only ever went as far as “As an outsider to the human race I’ve observed that…”

Clive Robinson February 5, 2017 10:55 AM

@ Winter,

The crap is from other animals. As in integrated chicken-carp farming.

Yes, and a friend who does this sort of thing for the very lucrative Polish market tells me you should not use fowl poo, only that from quadruped herbivores with a natural-only diet, such as grass and hay/silage from very mature farmland, to avoid heavy metal and radioactive contaminants. Basically “organic” dairy/meat herbivores only.

Oh, speaking of chicken poo, are you aware that during the 1950–1990 period it was regularly fed back into other livestock? Including other poultry? No, neither was I, until a UK politico called Edwina Currie, a minister at what is now DEFRA, spilled the beans about it. Then of course BSE happened, and things have cleaned up a bit… But in some places “eggs too small to be sold” are fed back, shell and all, to other chickens…

For those wanting to know more about some of the stomach-churning things that are done to “mass produce” food, you could start by looking up the industrial bread-making method known as the “Chorleywood Process”. Depending on where you are, among the raw ingredients are “hair”, including human hair, and “feathers”, much of which comes from China, processed into a couple of E-number products…

Just one of the reasons I like to bake my own bread. Oh, then there’s “gelling agents” from seaweed and cattle hoofs, ligaments etc. that end up in jams/jellies/puddings/pies and children’s sweeties… Oh, and don’t look too carefully at the labels on industrially processed meats, such as sausages, bacon, salted beef, hams etc.; they contain nitrates that have been extracted from urine and other waste “slurry”. Human urine is still used in many places to get phosphates etc., which are hard to get other ways… But there are other things in sausages etc., which is why I have turned my hand to charcuterie on more than one occasion. Oh, and those flavours and colourings come from all sorts of places… Those gorgeously red lips on models may well be due to desiccated, crushed and powdered cochineal bugs (scale insects) that make the “natural colourant” carmine…

Oh, and most “smoked products” are not smoked but soaked in liquid smoke, which is way faster but does not have the preservation characteristics that you get with real “cold smoking”. Because quite a lot of liquid smoke is not made from wood smoke…

All part of life’s happy cycle. Whilst we talk of apex predators, we forget all the parasites and other lovelies that prey on them; everything in life, be it alive or a waste product from something that is alive, is food for other life, though we may not want to think of it…

Winter February 5, 2017 11:47 AM

@Clive
The poultry poop is standard food for fish cultures in SE Asia (hence my link).

A nice recent development is the appreciation of fermentation (putrefaction) in the history of human development. Fermentation can achieve most of the aims of fire in the preparation of food, so its use might have actually preceded that of fire.

One bloke tested it by sinking a dead horse in a lake and testing the meat over a month (months?). It stayed good.

Etienne February 6, 2017 3:57 AM

You may not be able to (or even want to) secure the internet, seen as the IPv4 network, because we need such a place where an e-mail does not need full sender identification, or where some software wants to talk to another computer and will by itself provide the necessary security (encryption, …).
You may want to secure an IPv6 network (i.e. part of the 128-bit address range) by authenticating each and every frame (i.e. knowing the name of the computer and the name of the person owning/in charge of that computer). Computer owners on such a network would “sign an agreement” with that IPv6 subnet maintainer (which could be a private company); the subnet maintainer would be able to remotely disable any computer which misbehaves, any frame without authentication would be immediately purged at the entrance of the network, security would be enough to do banking, buying things would be safe, and maybe even voting…
I do not say encryption would not be necessary on that IPv6 subnet, but the subnet maintainer should probably be able to force the use of secure encryption and deprecate old protocols.
Obviously if things break on that IPv6 subnet, you would probably have an IPv4 e-mail to contact the maintainer.
All of this would be possible, as long as people using such services are willing to pay for the security. Right now most people would not agree to spend a penny on security, or more exactly they would not pay the right people.
So currently the only way to make money providing software is to sell user descriptions to third parties… and be as insecure as possible… and make the EULA cover everything.
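
The “purge any frame without authentication at the entrance of the network” idea can be illustrated with a toy sketch. To be clear, this is my illustration, not an existing IPv6 mechanism: assume each registered machine shares a symmetric key with the subnet maintainer and tags every frame with an HMAC, which the network edge verifies before forwarding.

```python
# Toy sketch of per-frame authentication (hypothetical scheme, not
# a real IPv6 feature): one shared key per registered machine, an
# HMAC-SHA256 tag on every frame, verification at the network edge.
import hashlib
import hmac
import os
from typing import Optional

TAG_LEN = 32  # bytes of HMAC-SHA256 output

def tag_frame(key: bytes, payload: bytes) -> bytes:
    """Prepend an authentication tag to the payload."""
    return hmac.new(key, payload, hashlib.sha256).digest() + payload

def edge_filter(key: bytes, frame: bytes) -> Optional[bytes]:
    """Return the payload if the tag verifies; purge (None) otherwise."""
    tag, payload = frame[:TAG_LEN], frame[TAG_LEN:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return payload if hmac.compare_digest(tag, expected) else None

key = os.urandom(32)                                  # shared with the maintainer
assert edge_filter(key, tag_frame(key, b"hello")) == b"hello"   # accepted
assert edge_filter(key, b"\x00" * TAG_LEN + b"spoofed") is None  # purged
```

A real deployment would also need per-machine key distribution, replay protection, and a revocation path for the “remotely disable misbehaving computers” part, which is where most of the hard problems live.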

Clive Robinson February 6, 2017 4:21 AM

@ Etienne,

There was once a joke about Microsoft, which was “Whatever the question, the answer is not Microsoft”…

Whilst not entirely true of IPv6, that is how the market perceives it… Thus trying to build a business model around it is currently not going to pay dividends. Nor is it likely to any time soon.

I could give a long list of both technical and business reasons, but I’ll stick to one of each that you hear quite often:

1, It’s overly complicated and causes lots of issues with existing setups.

2, Nobody is using it so why should I go to the expense.

The first needs to change long before the second will.

Thunderbird February 6, 2017 2:16 PM

Apparently Bruce is like the elephant, because all the blind men are sure what they’re feeling: “It’s a conservative!” “No, it’s a liberal!” “A fascist!” “A corporate shill!” “The reincarnation of Steve Jobs!” “No, you fool, it’s the NSA!”

Sigh.

I suppose it would be futile to ask people to quit banging their drums and kicking various straw men and actually talk about security? It would be a shame to have to have a curated version of the blog, but I am leaning to “in favor.” If people insist on farting in the elevator, they shouldn’t be surprised to find themselves climbing the stairs…

Also, another modest request: could people pick a nym and stick to it for at least the duration of comments on a single post? Please? Some folks seem to be treating it as a “comment title.”

Clive Robinson February 6, 2017 8:58 PM

@ Thunderbird,

Apparently Bruce is like the elephant..

That is true in other ways: arguably Bruce is “the elephant in the room” to those who seek, via FUD or other propaganda, to gain a personal empire at the public’s expense.

Such as those empire builders who seek taxpayer dollars for what is “Security Theatre”. They can also reasonably be seen as a national security threat, as they are causing economic harm not just to the US but to world trade, and thus to prosperity and the peace it brings.

You have to think about the countering effect of calling out “Security Theater” against these attempted excesses via the scare tactics of “Think of the children” etc. Whilst the latter has knee-jerk emotional appeal, it is also crying wolf, and thus it ages. The former, however, has a simple but sensible logic at its core that does not age; like the drip of water on a rock, it remorselessly wears them down little by little. But on empire builders it wears emotionally as well, like the age-old Chinese water torture; it will not stop.

Thus, like any truth, it is seen as a danger that can not be attacked directly, so instead they attack the messenger, by projection, as being the opposite of them. In a way it’s funny, because it says more about the deficiencies of the attackers than it does about the messenger…

Zak February 8, 2017 10:40 AM

I feel there is one glaringly incorrect assumption in this article: The idea that “Law” or another “agency” can rectify the issues involving Internet/Information security in the US OR the world.

One easily identifiable example of this failure of both law and agencies is HEALTHCARE pricing (and the industry) within the US. I’m sure all would agree that yearly price increases that exceed market growth are, and will continue to be, a major problem. We hear about issues with price and affordability on a continuous and regular basis, so much so that it’s become a major political topic. We’ve been offered two general solutions: socializing the healthcare system OR somehow forcing competition. Both of these methods require a myriad of additional laws and increased government involvement in the daily lives of the citizenry.

Unfortunately, what we’re not told is that there are EXISTING laws already in place that would not only prevent the problem of increasing healthcare prices but would have an effect on prices that would eliminate the need for healthcare “insurance” altogether (except for events you don’t plan on experiencing, like a heart attack). What are these laws? Title 15 of the US Code, including the Robinson–Patman Act of 1936 and the Sherman Antitrust Act of 1890.

The Federal government has been exempting the healthcare industry from these laws for decades—and by the way, no other industry is protected from these laws.

If laws can be ignored and exempted in one industry, why would we assume they would somehow be enforced in any other?

The solution to this problem requires “us” to demand that all existing laws be enforced. Companies that produce any computerized device should be liable for the security of their devices. Liability can be enforced through existing law (to my knowledge), so crafting more laws and creating more bureaucracies is a pointless effort at best. At worst, it means more involvement by the state in the private lives of its citizens.

Semper Cogito February 13, 2017 9:16 AM

#IoT = #EUBoT

aka Eternally Unpatched Botnet of Things

“We also need to start disconnecting systems. If we cannot secure complex systems to the level required by their real-world capabilities, then we must not build a world where everything is computerized and interconnected.”

Near term, NOT connecting things to the Internet is the only workable approach to avoiding chaos.

TM February 15, 2017 2:51 AM

My fridge and my stove are not on the internet, and neither are my friends’. Nobody I know wants their fridge and stove to be on the internet. The internet of things is hype; it’s not something ordinary people are clamoring for.

What I’m afraid of is that the industry may force us to accept a world where everything is on the internet without asking us and without giving us a choice. That is the biggest concern, not that people will voluntarily connect everything to the internet because they like it so much.

Wael February 15, 2017 5:06 AM

@TM,

My fridge and my stove

Gives a whole different meaning to firewalls. As long as your stove is on, the firewall is built-in. A firewall in the fridge… hmm, not a good idea. Kind of defeats the purpose. Security versus usability, I guess 🙂

AT February 15, 2017 8:44 AM

Regulation isn’t the answer … for all the reasons you point out in the article.

For domestic cases, it would be much better to put the fear of liability into companies. A law allowing ordinary users to sue companies for security breaches (much like our product liability laws) would go a long way toward organically incentivizing good security practices.

For international cases, I would argue that this is the job of the army (broadly speaking). If a hacker attacks a US company, it should be entirely reasonable for the US army to go after the hacker.

Henry February 16, 2017 5:31 PM

A government agency is demonstrably the wrong solution, and not just because the government is so corruptible and inept.

You have a worldwide threat. You do not have a worldwide government (thank god). And if you did, see above.

Walt R. November 29, 2017 1:49 PM

Sir,
I do not believe government regulation is the answer, although industry needs to be held accountable and responsible for the crap they expect us to buy. I was helping a friend with her wireless router: the admin password could be changed, but the SSID and wireless password could not, and there was nothing resembling a firewall or other security in it. I told her it was a hunk of junk.
The internet of stupid things needs to be halted until security is built in. Microsoft security is an oxymoron; I would not have any device corrupted by Microsoft. Considering that GM’s OnStar has already been to divorce court, one of these days someone will hack OnStar and shut down GM police and emergency vehicles. I would not own a vehicle that is connected to the internet: Jeep was hacked, patched, and hacked again. And how many “smart TVs” have been hacked, changed their EULAs to allow spying, or simply stopped working?
Thanks for a great article!

Just Me January 17, 2018 6:46 AM

I always knew connecting car brakes to the internet was a bad idea. I think this needs a hardware fix, and it’s not even that complex. Separate the critical systems (brakes, engine, and so on) from the non-critical systems (the DVD player, Bluetooth, in-car Wi-Fi, and so on), and if you want to add a security feature like automatic crash reporting or real-time diagnostics, do it through WRITE-ONLY MEMORY. One system is deaf, the other is mute.
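The deaf/mute split described above can be modeled in software as a toy “data diode”: the critical side holds only a write handle, the telemetry side only a read handle, and neither object gives its holder a path in the other direction. This is a minimal, hypothetical sketch of the concept (all names are made up), not a description of any real automotive bus:

```python
from collections import deque

class DataDiode:
    """One-way channel: hands out a write-only end and a read-only end.

    The writer closure has no way to read, and the reader closure has
    no way to write, mimicking a hardware one-way link.
    """
    def __init__(self):
        self._buf = deque()

    def writer(self):
        # Expose only the append operation: nothing here can read.
        return self._buf.append

    def reader(self):
        # Expose only a pop operation: nothing here can write.
        def read():
            return self._buf.popleft() if self._buf else None
        return read

def critical_system(tx):
    """Brakes/engine side: emits diagnostics, never listens."""
    tx({"subsystem": "brakes", "pad_wear_pct": 42})

def telemetry_system(rx):
    """Infotainment side: receives diagnostics, has no channel back."""
    return rx()

diode = DataDiode()
critical_system(diode.writer())
print(telemetry_system(diode.reader()))  # {'subsystem': 'brakes', 'pad_wear_pct': 42}
```

In real hardware the same property would come from a physically unidirectional link (e.g. a transmit-only serial line) rather than from software discipline, which is the commenter’s point: enforce the one-way rule where it cannot be patched away.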
