The political economy of pervasive robots, gigapixel surveillance, and cryptographic attacks


[Image: Terminator 2 still]

Post soundtrack by Brad Fiedel.

So a (Yankee?) guy gave a talk in Australia. He didn't focus much on cryptographic attacks, so let me supply an example of one: someone has dumped a big Excel spreadsheet of insider email addresses. That is not a huge breakthrough in itself, but given the state of email security, it is likely that at least one of those bigwigs will get his account hacked by brute force.

https://cryptome.org/2016/08/guccifer2-dccc-pelosi-16-0831.zip

https://cryptome.org/2016/08/deep-politics-rev4.pdf

You should take a minute to spin through that PDF, although I will post some of its highlights later. It has clear language such as the following:

In 2006 Booz Allen Hamilton was discovered administering another surveillance program of probable illegality, something called the SWIFT monitoring program. Earlier, BAH, as it's sometimes known, also worked on the illegal Total Information Awareness Program. And that's not all. As we'll see later, it's also administering a previously-unknown FBI mass-surveillance program. (Those reading this white paper are the very first to learn of it.)

What BAH personifies, then, is Big-Brother-for-profit.

The PDF links to sites like:

https://www.aclu.org/news/booz-allens-extensive-ties-government-raise-more-questions-about-swift-surveillance-program

Even if you hate the ACLU on principle, hold your nose and read the PDF, because it deserves thoughtful analysis.


Similarly, Schneier believes that the Internet makes things very easy for attackers and very hard for defenders. And apparently, he takes money from the USA gov, because he wants more regulation:

https://www.schneier.com/blog/archives/2016/11/regulation_of_t.html

The guy who gave the talk did not teach the audience how to hack anything. And ultimately his conclusion was very stupid: he delivered a lot of silly pictures, then pulled a cop-out at the moment of truth.

Some of his most insightful points – along with the cop-out – can be summarized as follows:


While I don’t think anyone in the Army is cynical enough to say it, there are institutional incentives to permanent warfare.

An army that can practice is much better than one that can only train. Its leaders, tactics, and technologies are tested under real field conditions.

We just built ourselves a powerful apparatus for social control with no sense of purpose or consensus about shared values.

Do we want to be safe? Do we want to be free? Do we want to hear valuable news and offers?

The tech industry slaps this stuff together in the expectation that the social implications will take care of themselves. We move fast and break things.

Today, having built the greatest apparatus for surveillance in history, we’re slow to acknowledge that it might present some kind of threat.

We would much rather work on the next wave of technology:

The real answer to who will command the robot armies is: Whoever wants it the most.

And right now we don’t want it. Because taking command would mean taking responsibility.

I disagree with his claim. I have zero chance of controlling the world’s credit card system. Even if I want it more than George Soros, Soros has a much better chance of controlling it. As I will argue in a later post, Booz Allen Hamilton has a much better chance of grabbing control, even if they don’t “want it the most.”

So the conclusion to the lecture is a stupid cop-out. Someone with power will make a grab to use Internet tech to get MORE power. But it’s going to depend on actual violent power, not “willingness to take responsibility.”

He fails to note Sun-Tzu’s adage that no nation has EVER benefited from prolonged warfare. Thus if the USA’s “forever war” goes on long enough, there will be no more USA to feed it money and warm bodies.

The talk is as follows:

Who Will Command the Robot Armies?

The Military

My first contender for who will command the robot armies is the military.

This is the Predator, the forerunner of today’s aerial drones. Those things under its wing are Hellfire missiles.

These two weapons are the chocolate and peanut butter of robot warfare. In 2001, CIA agents got tired of looking at Osama Bin Laden through the camera of a surveillance drone, and figured out they could strap some missiles to the thing. And now we can’t build these things fast enough.

We’re now several generations into this technology, and soldiers now have smaller, portable UAVs they can throw like a paper airplane. You launch them in the field, and they buzz around and give you a safe way to do reconnaissance.

There are also portable UAVs with explosives in their nose, so you can fire them out of a tube and then direct them against a target (a group of soldiers, an orphanage, or a bunker) and make them perform a kamikaze attack.

The Army has been developing unmanned vehicles that work on land, little tanks that roll around with a gun on top, with a wire attached for control, like the cheap remote-controlled toys you used to get at Christmas.

Here you see a demo of a valiant robot dragging a wounded soldier to safety.

The Russians have their own versions of these things…

Not all these robots are intended as weapons. The Army is trying to automate transportation, …

So progress with autonomous and automated systems in the military is rapid.

The obvious question as these systems improve is whether there will ever be a moment when machines are allowed to decide to kill people without human intervention.

I think there’s a helpful analogy here with the Space Shuttle.

The Space Shuttle was an almost entirely automated spacecraft. The only thing on it that was not automated was the button that dropped the landing gear. The system was engineered that way on purpose, so that the Shuttle had to have a crew.

The spacecraft could perform almost an entire mission solo, but it would not be able to put its wheels down.

When the Russians built their shuttle clone, they removed this human point of control. The only flight the Buran ever made was done on autopilot, with no people aboard.

I think we’ll see a similar evolution in autonomous weapons. They will evolve to a point where they are fully capable of finding and killing their targets, but the designers will keep a single point of control.

And then someone will remove that point of control.
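
To make the "single point of control" idea concrete, here is a toy Python sketch (mine, not the speaker's; every name in it is invented). The whole engagement loop is automated except one function, and deleting a single line is all it takes to remove the human:

# Hypothetical sketch of an autonomy loop with one human-gated step.
# Nothing here describes a real system; it only illustrates the shape.
def find_target(sensor_feed):
    # Fully automated: pick the highest-confidence track.
    return max(sensor_feed, key=lambda t: t["confidence"])

def human_authorizes(target):
    # The single point of control: a person must type "yes".
    answer = input(f"Engage {target['id']}? (yes/no): ")
    return answer.strip().lower() == "yes"

def engage(target):
    print(f"engaging {target['id']}")

def mission_loop(sensor_feed):
    target = find_target(sensor_feed)
    if human_authorizes(target):  # delete this check and the system is fully autonomous
        engage(target)

mission_loop([{"id": "track-7", "confidence": 0.93},
              {"id": "track-9", "confidence": 0.41}])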

Last week I had a whole elaborate argument about how that could happen under a Clinton Administration. But today I don’t need it.

It’s important to talk about the political dynamic driving the development of military robots.

In the United States, we’ve just entered the sixteenth year of a state of emergency. It has been renewed annually since 2001.

It has become common political rhetoric in America to say that ‘we’re at war’, even though being ‘at war’ means something vastly different for Americans than, say, Syrians.


The goal of military automation is to make American soldiers less vulnerable. This laudable goal also serves a cynical purpose.

Wounded veterans are a valuable commodity in American politics, but we can’t produce them in large numbers without facing a backlash.

Letting robots do more of the fighting makes it possible to engage in low-level wars for decades at a time, without creating political pressure for peace.

As it becomes harder to inflict casualties on Western armies, their opponents turn to local civilian targets. These are the real victims of terrorism; people who rarely make the news but suffer immensely from the state of permanent warfare.

Once in a long while, a terror group is able to successfully mount an attack in the West. When this happens, we panic.

The inevitable hardening of our policy fuels a dynamic of grievance and revenge that keeps the cycle going.

While I don’t think anyone in the Army is cynical enough to say it, there are institutional incentives to permanent warfare.

An army that can practice is much better than one that can only train. Its leaders, tactics, and technologies are tested under real field conditions. And in ‘wartime’, cutting military budgets becomes politically impossible.

These remote, imbalanced wars also allow us to experiment with surveillance and automation technologies that would never pass ethical muster back home.

And as we’ll see, a lot of them make it back home anyway.

It’s worth remarking how odd it is to have a North American superpower policing remote areas of Pakistan or Yemen with flying robots.

Imagine if Indonesia were flying drones over northern Australia, to monitor whether anyone was saying bad things about Muslims there.

Half of Queensland would be in flames, and everyone in this room would be on a warship about to land in Jakarta.

The Police

My second contender for who will command the robot armies is the police.

Technologies that we develop to fight our distant wars get brought back, or leak back, into civilian life back home.

The most visible domestic effect of America’s foreign wars has been the quantity of military surplus equipment that ends up being given to police.

Local police departments around the country (and here in Australia) have armored vehicles, military rifles, night vision goggles and other advanced equipment.

After the Dallas police massacre, the shooter was finally killed by a remotely controlled bomb disposal robot initially designed for use by the military in Iraq.

I remember how surprising it was after the Boston marathon bombings to see the Boston police emerge dressed like the bad guys from a low-budget sci-fi thriller. They went full Rambo, showing up with armored personnel carriers and tanks.

Still, cops will be cops. Though they shut down all of downtown Boston, the police did make sure the donut shops stayed open.

The militarization of our police extends to their behavior, and the way they interact with their fellow citizens.

Many of our police officers are veterans. Their experience in foreign wars colors the attitudes and tactics they adopt back home.

Less visible, but just as important, are the surveillance technologies that make it back into civilian life.

These include drones with gigapixel cameras that can conduct surveillance over entire cities, and whose software can follow dozens of vehicles and pedestrians automatically.

The United States Border Patrol has become an enthusiastic (albeit not very effective) adopter of unmanned aerial vehicles.

These are also being used here in Australia, along with unmanned marine vehicles, to intercept refugees arriving by sea.

Another gift of the Iraq war is the Stingray, a fake base station that hijacks cell phone traffic, and is now being used rather furtively by police departments across the United States.

When we talk about government surveillance, there’s a tendency to fixate on national agencies like the NSA or CIA. These are big, capable bureaucracies, and they certainly do a lot of spying.

But these agencies have an internal culture of following rules (even when the rules are secret) and an institutional commitment to a certain kind of legality. They’re staffed by career professionals.

None of these protections apply when you’re dealing with local law enforcement. I trust the NSA and CIA not to overstep their authority much more than I trust some deputy sheriff in East Dillweed, Arizona.

Unfortunately, local police are getting access to some very advanced technology.

So for example San Diego cops are swabbing people for DNA without their consent, and taking photos for use in a massive face recognition database. Half the American population now has their face in such a database.

And the FBI is working on a powerful ‘next-generation’ identification system that will be broadly available to other government agencies, with minimal controls.

The Internet of Things

But here the talk is getting grim! Let’s remember that not all robots are out to kill us, or monitor us.

There are all kinds of robots that simply want to help us and live with us in our homes, and make us happy.

Let’s talk about those friendly robots for a while.

Here is the Internet-connected kettle! There was a fun bit of drama with this just a couple of weeks ago, when the data scientist Mark Rittman spent eleven hours trying to connect it to his automated home.

The kettle initially grabbed an IP address and tried to hide:

3 hrs later and still no tea. Mandatory recalibration caused wifi base station reset, now port-scanning network to find where kettle is now.
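
For the curious, "port-scanning network to find where kettle is" takes about fifteen lines of Python. This is a rough sketch under two assumptions of mine (not from Rittman's tweets): the home network is 192.168.1.0/24, and the kettle answers on TCP port 80.

import socket

# Sweep a /24 home network for anything with an open web port.
def find_devices(prefix="192.168.1.", port=80, timeout=0.2):
    alive = []
    for host in range(1, 255):
        addr = prefix + str(host)
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(timeout)
        try:
            if sock.connect_ex((addr, port)) == 0:  # 0 means the port answered
                alive.append(addr)
        finally:
            sock.close()
    return alive

print(find_devices())  # candidate addresses where the kettle may be hiding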

Then there was a postmodern moment when the attention Rittman’s ordeal was getting on Twitter started causing his home system to go haywire:

Now the Hadoop cluster in the garage is going nuts due to RT to @internetofshit, saturating network + blocking MQTT integration with Amazon Echo

Finally, after 11 hours, Rittman was able to get everything working and posted this triumphant tweet:

Well the kettle is back online and responding to voice control, but now we’re eating dinner in the dark while the lights download a firmware update.

The people who design these devices don’t think about how they are supposed to peacefully coexist in a world full of other smart objects.

This raises the question of who will step up and figure out how to make the Internet of Things work together as a cohesive whole.

Evil Hackers

Of course, the answer is hackers!
… map of denial-of-service attacks against a major DNS provider that knocked a lot of big-name sites offline in the United States.

This particular botnet used webcams with hard-coded passwords. But there is no shortage of vulnerable devices to choose from.

In August, researchers published a remote attack against a smart lightbulb protocol…

In their proof of concept, the authors were able to infect smart lightbulbs in a chain reaction, using a passing car or a drone to deliver the initial hack.

The bulbs can be permanently disabled, or made to put out a loud radio signal that will disrupt wifi anywhere nearby.

Since these devices can’t be trusted to talk to the Internet by themselves, one solution is to have a master device that polices net access for all the others, a kind of robot butler to keep an eye on the staff.
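
That "robot butler" is essentially an egress firewall. Here is a minimal sketch of the policy logic, assuming a gateway that sees every outbound connection; all device names and hosts are made up:

# Per-device egress allowlist: each gadget may only talk to the vendor
# endpoints it actually needs. Every entry here is hypothetical.
ALLOWED = {
    "kettle":     {"firmware.kettle.example.com"},
    "lightbulb":  {"hub.bulbs.example.com"},
    "thermostat": {"api.thermo.example.com"},
}

def permit(device, destination):
    # Default deny: unknown devices and unknown destinations are blocked.
    return destination in ALLOWED.get(device, set())

print(permit("kettle", "firmware.kettle.example.com"))  # True
print(permit("kettle", "botnet-c2.example.net"))        # False: the kettle stays home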

Google recently introduced Google Home, … It sits in your house, listens through always-on microphones,…

So maybe it’s Google who will command the robot armies! They have the security expertise to build such a device and the programming ability to make it useful.

Yet Google already controls our online life to a troubling degree. Here is a company that runs your search engine and your web browser, and manages your email, your DNS, your phone's operating system, and now your phone itself.

Moreover, Doubleclick and Google Analytics tell Google about your activity across every inch of the web.

Now this company wants to put an always-on connected microphone in every room of your home.

What could go wrong?

For examples of failure, always turn to Yahoo.

On the same day that Google announced Google Home, Reuters revealed that Yahoo had secretly installed software in 2014 to search through all incoming email at the request of the US government.

What was especially alarming was the news that Yahoo had done this behind the backs of its own security team.

This tells us that whatever safeguards Google puts in its always-on home microphone will not protect us from abuses by government, even if everyone at Google security were prepared to resign in protest.

And that’s a real problem.

Over the last two decades, the government’s ability to spy on its citizens has grown immeasurably.

Mostly this is due to technology transfer from the commercial Internet, whose economic model is mass surveillance. Techniques and software that work in the marketplace are quickly adopted by intelligence agencies worldwide.

President Obama has been fairly sparing in his use of this power. I say this not to praise him, but actually to condemn him. His relative restraint, and his administration’s obsession with secrecy, have masked the full extent of power that is available to the executive branch.

Now that power is being passed on to a new President, and we are going to learn all about what it can do.

Amazon

So Google is out! …

Maybe Amazon can command the robot armies? …

Amazon has been trying to achieve this perfect robotic workforce for years. Many of the people who work in its warehouses are seasonal hires, who don’t get even the limited benefits and job security of the regular warehouse staff.

Amazon hires such workers through a subsidiary called Integrity. If you know anything about American business culture, you’ll know that a company called “Integrity” can only be pure evil.

Working indirectly for Amazon like this is an exercise in precariousness. Integrity employees don’t know from day to day whether they still have a job. Sometimes their key card is simply turned off.

A lot of what we consider high-tech startups work by repackaging low-wage labor.

Take Blue Apron, one of a thousand “box of raw food” startups that have popped up in recent years. Blue Apron lets you cook a meal without having to decide on a recipe or shop for ingredients. It’s kind of like a sous-chef simulator.

Blue Apron relies on a poorly-trained, low wage workforce to assemble and deliver these boxes. They’ve had repeated problems with workplace violence and safety at their Richmond facility.

It’s odd that this human labor is so invisible.

Wealthy consumers in the West have become enamored with “artisanal” products. We love to hear how our organic pork is raised, or what hopes and dreams live inside the heart of the baker who shapes our rustic loaves.

But we’re not as interested in finding out who assembled our laptop.

In fact, a big selling point of online services is not having to deal with other human beings. We never engage with the pickers in an Amazon warehouse who assemble our magical delivery. And I will never learn who is chopping vegetables for my JuiceBro packet.

So which is it? Is human labor something we celebrate, or something we hide?

Our software systems treat labor as a completely fungible commodity, and workers as interchangeable cogs. We try to put a nice spin on this frightening view of labor by calling it the “gig economy”.

The gig economy disguises precariousness as empowerment. You can pick your own hours, work only as much as you want, and set your own schedule.

For professionals, that kind of freedom is attractive. For people in low-wage jobs, it’s a disaster. A job has predictable hours, predictable pay, and confers stability and social standing.

The gig economy takes all that away. You work whatever hours are available, with no guarantee that there will be more work tomorrow.

I do give Amazon credit for one thing: their white-collar employees are just as miserable as their warehouse staff. They don't discriminate.

As we automate more of middle management, we are moving towards a world of scriptable people: human beings whose labor is controlled by an algorithm or API.

Amazon has gone further than anyone else in this direction with Mechanical Turk.

Mechanical Turk is named after an 18th-century device that purported to be a chess-playing automaton. In reality, it had a secret compartment where a human player could squeeze himself in unseen.

So the service is literally named after a box that people squeezed themselves into to pretend to be a machine. And it has that troubling, Orientalist angle to boot.
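
"Labor controlled by an API" is meant literally. Here is a sketch of posting work to a hundred human beings with boto3's MTurk client; the task, reward, and URL are invented, it assumes AWS credentials are configured, and it points at Amazon's requester sandbox so no real worker is dispatched:

import boto3

# Hire humans the same way you would call any other web service.
client = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

question_xml = """<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://survey.example.com/task</ExternalURL>
  <FrameHeight>400</FrameHeight>
</ExternalQuestion>"""

hit = client.create_hit(
    Title="Fill out a short survey",   # invented task
    Description="Answer 10 questions about a product.",
    Reward="0.25",                     # dollars, passed as a string
    MaxAssignments=100,                # 100 interchangeable people
    LifetimeInSeconds=86400,
    AssignmentDurationInSeconds=600,
    Question=question_xml,
)
print(hit["HIT"]["HITId"])  # the workforce is now addressable by ID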

A fascinating thing about Mechanical Turk is how heavily it’s used for social science research, including research into low-wage labor.

Social scientists love having access to a broad set of survey-takers, but don’t think about the implications (or ethics) of using these scriptable people, who spend their entire workday filling out similar surveys.

A lot of our social science is being conducted by having these people we treat like robots fill out surveys.

Let me talk briefly about the robots inside us.

I have a particular fascination with chatbots… Whatever fun they promise, the chatbot experience really isn’t. It’s companies trying to hijack our sociability with computer software, in order to manipulate us more effectively. And as the software gets better, these interactions will start to take a social and cognitive toll.

Social Media

Sometimes you don’t even notice when you’re acting like a robot.

This is a picture of my cat, Holly.

My roommate once called me over all excited to show me that he’d taught Holly to fetch.

I watched her walk up to him with a toy in her mouth and drop it at his feet. He picked it up and threw it, and she ran and brought it back several times until she had had enough.

He beamed at me. “She does this a couple of times a day.”

He was about to go back to whatever complicated coding task the cat had interrupted, but something about the situation felt strange. We thought for a moment, our combined human brains trying to work out the implications.

My roommate hadn’t trained the cat to do anything.

She had trained him to be her cat toy.

I think of this whenever I read about Facebook. Facebook tells us that by liking and sharing stuff on social media, we can train their algorithm to better understand what we find relevant, and improve it for ourselves and everyone else.

Here, for example, is a screenshot from a live feed of the war in Syria. People are reacting to it on Facebook as they watch, and their reaction emoji scroll from right to left. It’s unsettling.

What Facebook is really doing is training us to click more. Every click means money, so the site shows us whatever it has to in order to maximize those clicks.

The result can be tragic. With no ethical brake on the game, and no penalty for disinformation, outright lies and hatred can spread unchecked. Whatever Facebook needs to put on your screen to make you click is what you will see.
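
A toy model of that incentive (mine, not the speaker's) makes the problem obvious: if the ranking objective is predicted clicks and nothing else, truth never enters the computation.

# Toy feed ranker: score posts purely by predicted engagement.
# "accurate" exists in the data but never appears in the objective.
posts = [
    {"headline": "City council passes budget",       "p_click": 0.02, "accurate": True},
    {"headline": "SHOCKING: candidate's dark secret", "p_click": 0.31, "accurate": False},
    {"headline": "Study finds modest effect",         "p_click": 0.01, "accurate": True},
]

for post in sorted(posts, key=lambda p: p["p_click"], reverse=True):
    print(f'{post["p_click"]:.2f}  {post["headline"]}')
# The fabricated story tops the feed; the objective is working as designed.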

In the recent US election, Facebook was the primary news source for 44% of people, over half of whom used it as their only news source.

Voters in our last election who had a ‘red state’ profile saw absolutely outrageous stories on their newsfeed. There was a cottage industry in Macedonia writing fake stories that would get boosted by Facebook’s algorithm. There were no consequences to this, other than electing an orange monster.

But Facebook insists it’s a tech company, not a media company.

Chad and Brad

My final nominees for commanders of the robot armies are Chad and Brad.

Chad and Brad are not specific people. They’re my mental shorthand for developers who are just trying to crank out some code on deadline, and don’t think about the wider consequences of their actions.

The principle of charity says that we should assume Chad and Brad are not fucking up intentionally; they just don’t think things through.

Consider Pokémon Go, which when it was initially released required full access to your Gmail account. To play America’s most popular game, you practically had to give it power of attorney.

And the first action Pokémon Go had you take was to photograph the inside of your house.

You might think this was a brilliant conspiracy to seize control of millions of Gmail accounts, or harvest a trove of private photographs.

But it was only Chad and Brad, not thinking things through.
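
The Gmail episode comes down to a single parameter. Here is a hedged sketch of building a Google OAuth consent URL: the client ID and redirect are placeholders I made up, but the endpoint and scope strings are Google's real ones, and the only difference between "sign in" and "hand over your mailbox" is the scope:

from urllib.parse import urlencode

def consent_url(scope):
    params = {
        "client_id": "YOUR_CLIENT_ID.apps.googleusercontent.com",  # placeholder
        "redirect_uri": "https://game.example.com/callback",       # placeholder
        "response_type": "code",
        "scope": scope,
    }
    return "https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params)

print(consent_url("openid email"))              # all a game needs: who you are
print(consent_url("https://mail.google.com/"))  # read, send, and delete all your mail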

ProPublica recently discovered that you could target housing and employment ads on Facebook based on ‘ethnic affinity’, a proxy for race.

It’s hard to express how illegal this is in the United States. An entire civil rights movement happened precisely to outlaw this kind of discrimination.

My theory is that every Facebook lawyer who saw this interface had a fatal heart attack. And when no one registered any objection, Chad and Brad shipped it.

Here’s an example from Andy Freeland of Uber’s flat-fare zone in Los Angeles.

You can see that the boundary of this zone follows racial divisions. If you live in a black part of LA, you’re out of luck with Uber. Whoever designed this feature probably just sorted by ZIP code and picked a contiguous area above an income threshold. But the results are discriminatory.

What makes Chad and Brad a potent force is that you rarely see their thoughtlessness so clearly. People are alert to racial discrimination, so sometimes we catch it. But there’s a lot more we don’t catch, and modern machine learning techniques make it hard to audit systems for carelessness or compliance.

Here is a similar map of Uber’s flat-fare zone in Chicago. If you know the city, you’ll notice it’s got an odd shape, and excludes the predominantly black south side of the city, south of the diagonal line. I’ve shown the actual Chicago city limits on the right, so you can compare.
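
Since auditing is harder than shipping, it's worth showing how little code a first-pass audit takes. A sketch with invented numbers, assuming you have the list of included ZIP codes and ZIP-level demographics: compute the coverage rate per group and apply the four-fifths rule used in US disparate-impact analysis.

# First-pass disparate-impact check on a service zone (all data invented).
# Each row: (ZIP code, majority-black?, inside the flat-fare zone?).
zips = [
    ("60601", False, True), ("60605", False, True), ("60614", False, True),
    ("60619", True, False), ("60628", True, False), ("60637", True,  True),
]

def coverage(group_is_black):
    rows = [included for (_, is_black, included) in zips if is_black == group_is_black]
    return sum(rows) / len(rows)

rate_black, rate_other = coverage(True), coverage(False)
print(f"black ZIPs covered: {rate_black:.0%}, other ZIPs covered: {rate_other:.0%}")
if rate_black / rate_other < 0.8:
    print("fails the four-fifths rule: the zone has disparate impact")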

Or consider this screenshot from Facebook, taken last night. Facebook added a nice little feature that says ‘you have new elected representatives, click here to find out who they are!’

When you do, it asks you for your street address. So to find out that Trump got elected, I have to give a service that knows everything about me except my address (and who has a future member of Trump’s cabinet on its board) the one piece of information that it lacks.

This is just the kind of sloppy coding we see every day, but it plays out at really high stakes.

The Chads and Brads of this world control algorithms that decide if you get a loan, if you’re more likely to be on a watch list, and what kind of news you see.

For more on this topic, I highly recommend Cathy O’Neil’s new book, Weapons of Math Destruction.

Conclusion

So who will command the robot armies?

Is it the army? The police?

Nefarious hackers? Google, or Amazon?

Some tired coder who just can’t be bothered?

Facebook, or Twitter?

Brands?

I wanted to end this talk on a note of hope. I wanted to say that ultimately who commands the robot armies will be up to us.

That it will be some version of “we the people” that takes these tools and uses them with the care they require.

But it just isn’t true.

The real answer to who will command the robot armies is: Whoever wants it the most.

And right now we don’t want it. Because taking command would mean taking responsibility.

Facebook says it’s not their fault what people share on the site, even if it’s completely fabricated, and helps decide an election.

Twitter says there’s nothing they can do about vicious racists using the site as a political weapon. Their hands are tied!

Uber says they can’t fight market forces or regulate people’s right to drive for below minimum wage.

Amazon says they can’t pay their employees a living wage because they aren’t even technically employees.

And everyone agrees that the answer to these problems is not regulation, but new and better technologies, and more automation.

Nobody wants the responsibility; everybody wants the control.

Instead of accountability, all we can think of is the next wave of technology that will make everything better. Rockets, robots, and self-driving cars.

We innovated ourselves into this mess, and we’ll innovate our way out of it.

Eventually, our technology will get so advanced that we can build sentient machines, and they will help us create (somehow) a model society.

Getting there is just a question of being sufficiently clever.

On my way to this conference from Europe, I stopped in Dubai and Singapore to break the journey up a little bit.

I didn’t think about the symbolism of these places, or how they related to this talk.

But as I walked around, the symbolism of both places was hard to ignore.

Dubai, of course, is a brand new city that has grown up in an empty desert. It’s like a Las Vegas without any fun, but with much better Indian food.

In Dubai, the gig economy has been taken to its logical conclusion. Labor is fungible, anonymous, and politically inert. Workers serve at the whim of the employer, and are sent back to their home countries when they’re not wanted.

There are different castes of foreign workers—western expats lead a fairly cozy life, while South Indian laborers and Filipino nannies have it rough.

But no matter what you do, you can never hope to be a citizen.

Across all the Gulf states there is a permanent underclass of indentured laborers with no effective legal rights. It’s the closest thing the developed world has to slavery.

Singapore, where I made my second stop, is a different kind of animal.

Unlike Dubai, Singapore is an integrated multi-ethnic society where prosperity is widely shared, and corruption is practically nonexistent.

It may be the tastiest police state in the world.

On arrival there, you get a little card telling you you’ll be killed for drug smuggling. Curiously, they only give it to you once you’re already over the border.

But the point is made. Don’t mess with Singapore.

Singaporeans have traded a great deal of their political and social freedom for safety and prosperity. The country is one of the most invasive surveillance states in the world, and it’s also a clean, prosperous city with a strong social safety net.

The trade-off is one many people seem happy with. While Dubai is morally odious, I feel ambivalent about Singapore. It’s a place that makes me question my assumptions about surveillance and social control.

What both these places have in common is that they had some kind of plan. As Walter Sobchak put it, say what you will about social control, at least it’s an ethos.

The founders of these cities pursued clear goals and made conscious trade-offs. They used modern technology to work towards those goals, not just out of a love of novelty.

We, on the other hand, didn’t plan a thing.

We just built ourselves a powerful apparatus for social control with no sense of purpose or consensus about shared values.

Do we want to be safe? Do we want to be free? Do we want to hear valuable news and offers?

The tech industry slaps this stuff together in the expectation that the social implications will take care of themselves. We move fast and break things.

Today, having built the greatest apparatus for surveillance in history, we’re slow to acknowledge that it might present some kind of threat.

We would much rather work on the next wave of technology: a smart home assistant in every home, self-driving cars, and rockets to Mars.

We have goals in the long term: to cure illness, end death, fix climate change, colonize the solar system, create universal prosperity, reinvent cities, and become beings of pure energy.

But we have no plan about how to get there in the medium term, other than “let’s build things and see what happens.”

What we need to do is grow up, and quickly.

Like every kid knows, you have to clean up your old mess before you can play with the new toys. We have made a colossal mess, and don’t have much time in which to fix it.

And we owe it to these poor robots! They depend on us, they’re trying to serve us, and they’re capable of a lot of good. All they require from us is leadership and a willingness to take responsibility. We can’t go back to the world we had before we built them.

http://idlewords.com/talks/robot_armies.htm
