Rick Falkvinge's Blog
May 3, 2017
De-spamming service “Unroll” selling your inbox to Uber shows the importance of information hygiene, yet again
Privacy: It was a perfect service: sorting your mail and not just removing all spam for you, but also unsubscribing you from all of that spam garbage going forward. It kept your inbox perfectly clean. But behind the curtains, it also sold your inbox to the highest bidder.
Sometimes, you’re maliciously signed up to tens of thousands of mailing lists because somebody was annoyed with something you said, somewhere. The cost of doing so is low, and it causes a ton of headaches as you’re getting hundreds of spam mails per minute. Fortunately, most of those are double-opt-in confirmation mails — “click this link to confirm the subscription” — but maybe five percent are not, and those malicious signups will continue to clobber your inbox with noise.
Enter Unroll, which was the solution for this scenario: you gave it access to your mailbox, and it would not only detect and remove such unwanted spam, but also unsubscribe you from those tens of thousands of malicious subscriptions. Except, as it turns out, they also kept every single one of your mails, including those with passwords and other sensitive information, and sold them to the highest bidder.
It was just a short passage in an otherwise fascinating portrait of the Uber CEO by the New York Times:
So, the service Unroll was bought by Slice Intelligence. This is the first red flag: even if the service you signed up for were honest, their buyer may not be. (According to a quoted person below, Slice Intelligence bought Unroll specifically because they had access to tons of private mailboxes.)
This highlights the importance of information hygiene.
Information hygiene means being aware not of what somebody claims to do with your data, but of what they are able to do with it. For example, if a service promises to sort your email for you, then it necessarily must also be able to read all that email, for the act of sorting requires observation – and consequently, the service is also able to sell your private mails to others. This is an ability it holds regardless of what it promises to do, or more relevantly, appears to promise to do.
The act of sorting requires observation. Therefore, any service sorting your data must also be able to read all your data.
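To make the principle concrete, here is a minimal sketch in Python of what any mail-sorting service fundamentally runs on its side. The host, account and password are placeholders of mine, and a real service is far more elaborate, but the crucial step is identical: fetching and reading every complete message in order to classify it.

    import email
    import imaplib

    # Placeholder credentials: this is the access you hand a sorting service.
    conn = imaplib.IMAP4_SSL("imap.example.com")
    conn.login("you@example.com", "app-password")
    conn.select("INBOX")

    _, data = conn.search(None, "ALL")
    for num in data[0].split():
        _, parts = conn.fetch(num, "(RFC822)")   # downloads the ENTIRE message
        msg = email.message_from_bytes(parts[0][1])
        # Even the simplest "is this a newsletter?" test means the service has
        # already read the full mail: password resets, receipts, everything.
        headers = {key.lower() for key in msg.keys()}
        if "list-unsubscribe" in headers:
            print("newsletter:", msg["From"], "|", msg["Subject"])

    conn.logout()

Once a message has passed through that fetch call, nothing but the service’s goodwill prevents it from being stored, aggregated, or sold.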
In a blog post about the revelation that they sell inbox data, the Unroll CEO states that “it was heartbreaking to see that some of our users were upset to learn about how we monetize our free service”. The comments are, predictably, furious: the top comment states that “this is a one-strike-I-leave-the-service kind of thing”.
That same top comment also states that it’s a big deal to give somebody access to their inbox. Doing so should always, always, be done with the awareness that they will at least read all of it (if nothing else, to determine which mails to read closer, to perform the promised service), and that any information, once read, cannot be unread – but can be processed, aggregated, sold, et cetera.
If you are providing your inbox to somebody else, and want privacy, you need to encrypt your mails, just like you’re encrypting your Internet connection to prevent others from eavesdropping on it.
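As a sketch of what that means in practice – assuming GnuPG is installed and a key for the recipient exists in the local keyring, both assumptions of mine for illustration – encrypting a message body before it ever lands in a third-party-readable inbox looks like this:

    import subprocess

    def encrypt_for(recipient: str, plaintext: str) -> str:
        """Encrypt text to a PGP key via the gpg command-line tool."""
        result = subprocess.run(
            ["gpg", "--armor", "--encrypt", "--recipient", recipient],
            input=plaintext.encode(),
            capture_output=True,
            check=True,
        )
        return result.stdout.decode()

    # A service reading the inbox now sees only an opaque PGP blob.
    print(encrypt_for("you@example.com", "the sensitive part of the mail"))

Note that this protects only the body: sender, recipient, subject line and timestamps remain readable to anyone holding the inbox.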
At Hacker News, a person named Karl Katzke elaborates further:
I worked for a company that nearly acquired unroll.me. At the time, which was over three years ago, they had kept a copy of every single email of yours that you sent or received while a part of their service. Those emails were kept in a series of poorly secured S3 buckets. A large part of Slice buying unroll.me was for access to those email archives. Specifically, they wanted to look for keyword trends and for receipts from online purchases.
The founders of unroll.me were pretty dishonest, which is a large part of why the company I worked for declined to purchase the company. As an example, one of the problems was how the founders had valued and then diluted equity shares that employees held. To make a long story short, there weren’t any circumstances in which employees who held options or an equity stake would see any money.
I hope you weren’t emailed any legal documents or passwords written in the clear.
Take a moment to absorb that, and add to it the fact that they ran a useful service that many subscribed to, combined with that sloppiness (not to say borderline malice) with people’s private data – and sprinkle the CEO’s “heartbrokedness” on top when users learned how they made money.
Last but not least, Unroll tries to deflect blame here by saying they’re only selling “anonymized” data. It must be remembered that anonymization is hard. As in, really really really hard. Most data can be de-anonymized; strong anonymization is basically as hard as strong encryption, and most people doing anonymization are happy amateurs who do not understand the scope and difficulty of the task.
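A toy example of why “anonymized” so often is not: the classic linkage attack, where quasi-identifiers that survive anonymization are joined against a public dataset. The records below are invented, but Latanya Sweeney famously showed that a large majority of Americans are unique on just ZIP code, birthdate and sex.

    # An "anonymized" release with names stripped but quasi-identifiers intact.
    anonymized_purchases = [
        {"zip": "12305", "birthdate": "1945-07-31", "sex": "F", "purchase": "medication X"},
    ]

    # A public dataset, such as a voter roll, with the same quasi-identifiers.
    public_voter_roll = [
        {"name": "Jane Doe", "zip": "12305", "birthdate": "1945-07-31", "sex": "F"},
    ]

    QUASI_IDENTIFIERS = ("zip", "birthdate", "sex")
    for record in anonymized_purchases:
        for person in public_voter_roll:
            if all(record[k] == person[k] for k in QUASI_IDENTIFIERS):
                print(person["name"], "bought", record["purchase"])   # re-identified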
Privacy remains your own responsibility.
Syndicated Article
This article has previously been published at Private Internet Access.

April 30, 2017
Blockstream having patents in Segwit makes all the weird pieces of the last three years fall perfectly into place

Activism: Based on Blockstream’s behavior in the Bitcoin community, I have become absolutely certain that Segwit contains patents that Blockstream and/or their owners have planned to use offensively. I base this not on having read the actual patents, for they can be kept secret for quite some time; I base this on observing Blockstream’s behavior, and having seen the exact same behavior many times before in the past 20 years from entities that all went bankrupt.
In a previous part of my career, I was making telecom standards. This meant meeting with lots of representatives from other companies somewhere on the globe once a month and negotiating what would go into the standard that we would all later follow.
I was a representative of Microsoft. I would meet with people from Nokia, Ericsson, AT&T, and many other corporate names you’d recognize instantly, in small groups to negotiate standards going forward.
One thing that was quite clear in these negotiations was that everybody was trying to get as much as possible of their own patent portfolio into the industry standard, while still trying to maintain a façade of arguing purely on technical merits. Some were good at it. Some were not very good at it at all.
One of the dead-sure telltale signs of the latter was that somebody would argue that feature X should use mechanism Y (where they had undisclosed patent encumbrance) based on a technical argument that made no sense. When we technical experts in the room pointed out how the argument made no sense, they would repeat that feature X should absolutely use mechanism Y, but now based on a completely new rationale, which didn’t make any sense either.
The real reason they were pushing so hard for mechanism Y, of course, was that they had patents covering mechanism Y and wanted their patented technology to go into the industry standard, but they were unable to make a coherent argument that withstood technical scrutiny for why it was the preferable solution at hand, with or without such encumbrance.
In other words, classic goalpost moving.
As a technical team made up of many people from different companies, we would play along for a while with the fake technical rationale presented to get something patented into the standard, assuming good faith even though we knew the rationale was made up on the spot — but only up to a point. Our patience would run out when the party losing the technical argument didn’t give in, didn’t play their part of the game we all knew was happening.
But there’s more to Blockstream’s behavior than just moving technical goalposts.
As I later went into politics, I saw this pattern much more clearly – it was there in basically every decision in politics. We called it “high reasons and low reasons”. The “high”, or “noble”, reason would be the one you presented to the world for wanting X as policy. The “low” reason, meanwhile, was the one that made you give a damn about it in the first place. These were often not connected at all.
You could spot these “high-vs-low reason” conflicts in the tiny details. For example, somebody would argue for new invasive surveillance to combat terrorism, or so they would say. And then you read a little closer, and the bill text actually says “terrorism and other crimes“, an important part which nobody paid attention to. Two years after passing, it turns out that the surveillance powers were used 98% of the time to fight ordinary teenagers sharing music and movies with each other, and that the original bill sponsor was heavily in bed with the copyright industry.
So the exact same pattern of having one overt and one covert reason was present in politics as well, unsurprisingly. But there’s also another pattern here, one that we shall return to: “We want this feature because of X, or because of any other reason”.
But first, let’s compress the last three years of dialogue between Blockstream and the non-Blockstream bitcoin community:
[BS] We’re developing Lightning as a Layer-2 solution! It will require some really cool additional features!
[com] Ok, sounds good, but we need to scale on-chain soon too.
[BS] We’ve come up with this Segwit package to enable the Lightning Network. It’s kind of a hack, but it solves malleability and quadratic hashing. It has a small scaling bonus as well, but it’s not really intended as a scaling solution, so we don’t like it being talked of as such.
[com] Sure, let’s do that and also increase the blocksize limit.
[BS] We hear that you want to increase the block size.
[com] Yes. A 20 megabyte limit would be appropriate at this time.
[BS] We propose two megabytes, for a later increase to four and eight.
[com] That’s ridiculous, but alright, as long as we’re scaling exponentially.
[BS] Actually, we changed our mind. We’re not increasing the blocksize limit at all.
[com] Fine, we’ll all switch to Bitcoin Classic instead.
[BS] Hello Miners! Will you sign this agreement to only run Core software in exchange for us promising a two-megabyte non-witness-data hardfork?
[miners] Well, maybe, but only if the CEO of Blockstream signs.
[Adam] *signs as CEO of Blockstream*
[miners] Okay. Let’s see how much honor you have.
[Adam] *revokes signature immediately to sign as “Individual”*
[miners] That’s dishonorable, but we’re not going to be dishonorable just because you are.
[BS] Actually, we changed our mind, we’re not going to deliver a two-megabyte hardfork to you either.
[com] Looking more closely at Segwit, it’s a really ugly hack. It’s dead in the water. Give it up.
[BS] Segwit will get 95% support! We have talked to ALL the best companies!
[com] There is already 20% in opposition to Segwit. It’s impossible for it to achieve 95%.
[BS] Segwit is THE SCALING solution! It is an ACTUAL blocksize increase!
[com] We need a compromise to end this stalemate.
[BS] Segwit WAS and IS the compromise! There must be no blocksize limit increase! Segwit is the blocksize increase!
(Video: //falkvinge.net/wp-content/uploads/2017/04/movinggoalposts2.mp4)
This is just a short excerpt. I could go on and on, showing how Blockstream said that node count was completely negligible when Bitcoin Classic nodes started to pick up and how hashrate was the only valid measure, and how Blockstream is now talking – no, yelling – the exact opposite, when Bitcoin Unlimited is at 40%+ of hashrate.
This pattern is utterly typical for somebody hiding encumbrance in what they’re trying to achieve – for if Segwit locks in, it’s in bitcoin for eternity because of the way the chain is permanent, whether those parts of the chain are used by a particular actor or not.
There’s even more to it. It’s also typical for an actor who’s deflecting like this to try to invoke external enemies. Warhawks in governments have done the same over and over when they want to go to war: be aggressive about a narrative, call out anybody who challenges the narrative as a traitor and a saboteur, and beat the drums of war. It’s tribal, but it works. In this case, Blockstream have singled out two individuals as “enemies”, and people who want to be part of the community are encouraged to be aggressive against them. It’s practically straight out of scenes of the movie 1984.
All to get patents into bitcoin, regardless of whether you burn it and its community to the ground in the process.
That’s the only way their behavior makes sense, and it makes utter and complete sense in that way. I want to emphasize again that I have not read any of the Blockstream patent applications, and it would be pointless to do so as they can be kept secret for something like 18 months, so I wouldn’t have access to the full set anyway. But based on Blockstream’s behavior, I can say with dead certainty that I’ve seen this exact behavior many times in the past, and it’s always when somebody has a dual set of reasons – one for presentation and palate and another that drives the actual course of action.
With that said, Blockstream has something called a “Defensive Patent Pledge”. It’s a piece of legal text that basically says that they will only use their patents for defensive action, or for any other action.
Did you get that last part?
That’s a construction which is eerily similar to “terrorism and other crimes”, where that “and other crimes” creates a superset of “terrorism”, and therefore even makes the first part completely superfluous.
Politician says: “Terrorism and other crimes.”
The public hears: “Terrorism.”
What it really means: “Any crime including jaywalking.”
The Blockstream patent pledge has exactly this pattern: Blockstream will only use their patents defensively, or in any other way that Blockstream sees fitting.
Blockstream says: “For defense only, or any other reason.”
The public hears: “For defense only.”
What it really means: “For any reason whatsoever.”
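The logical content of such a clause fits in a few lines of Python; the function name is mine, not Blockstream’s:

    def pledge_permits(use_is_defensive: bool) -> bool:
        # "only for defensive action, or for any other action":
        # the second clause swallows the first, so the answer is always yes.
        return use_is_defensive or True

    print(pledge_permits(True), pledge_permits(False))   # True True: no constraint

A pledge that permits every input constrains nothing.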
Let’s assume good faith here for a moment, and that Greg Maxwell and Adam Back of Blockstream really don’t have any intention to use patents offensively, and that they’re underwriting the patent pledge with all their personal credibility.
It’s still not worth anything.
In the event that Blockstream goes bankrupt, all the assets – including these patents – will go to a liquidator, whose job it is to make the most money out of the assets on the table, and they are not bound by any promise that the pre-bankruptcy management gave.
Moreover, the owners of Blockstream may — and I predict will — replace the management, in which case the personal promises from the individuals that have been replaced carry no weight whatsoever with the new management. If a company makes a statement of its intentions, it is also free to make the opposite statement at a future date, and is likely to do so when other people are speaking for the company.
This leads us to ask who the owners of Blockstream are: who would have something to gain from pulling the owner card and replacing such a management?
Ah.
The owners of Blockstream are the classic financial institutions, specifically AXA, that have everything to lose from cryptocurrency gaining ground.
And they have bought (“invested in”) a company, which has an opportunity to get patents into the bitcoin blockchain, thereby being able to either outright ban people from using it, or collect a heavy rent from anybody and everybody who uses it.
The conclusion is inescapable here: Blockstream’s constant goalpost shifting has had the underlying goal of having Blockstream’s owners effectively own bitcoin through patent encumbrance.
As horrifying as that statement is, it’s the only way – the only way – that the actions of the past three years make perfect sense.

Bitcoin’s Unlimited Potential Lies in an Apolitical Core

Bitcoin – Nozomi Hayase: The ongoing Bitcoin block size debate has accelerated into a kind of civil war. From threats of a 51% attack to online trolls and controversy over the allegation of covert AsicBoost usage, disagreements on scaling solutions have created a toxic environment in the community. With a divide created by slogans of bigger or smaller blocks with Bitcoin Unlimited vs. Core memes, the ecosystem growing around this technology has started to resemble the craziness of party politics.
Guest Article
This is an article by Nozomi Hayase, a writer who has been covering issues of freedom of speech, transparency and decentralized movements.
We have seen a failure of national politics. From the 2008 financial meltdown and bank bailouts to cycles of austerity, unprecedented levels of corruption spawned a global crisis of legitimacy of institutions and governments. This only seems to have gotten worse.
In the US, at the center of financial and political power, the populace has been trapped by a corporate-sponsored political charade, with a rigged presidential primary and an election of the lesser of two evils. More and more, people are beginning to wake up to the broken promises and failed policies of their leaders, creating conflicts and instability in regions around the world. While solutions provided in the electoral arena have repeatedly been shown to be ineffective, Bitcoin presented an alternative – a departure from this system of politics.
Politics as Systems of Power
So what is politics? What are the characteristics of governance designed by it? The Oxford Dictionary defines politics as “activities associated with the governance of a country or area, especially the debate between parties having power.” Politics is inherently associated with power and is a means to organize society through leaders gaining control over the majority.
Western liberal democracy is politically engineered governance. Its fundamental feature is centralization. Rules made from the top are enforced and changes in the system require permission from those who are in positions of authority.
Historian Howard Zinn (1970) noted how:
“In modern times, when social control rests on ‘the consent of the governed’, force is kept in abeyance for emergencies, and everyday control is exercised by a set of rules, a fabric of values passed on from one generation to another by the priests and teachers of the society.” (p. 6)
This command-control style of governance works in hierarchies and is antithetical to democratic values. The integrity of the system depends on the success of rulers in fostering obedience of those in the network and preventing people from dissenting. For this, managing perception and public opinion through mass media becomes necessary, and the system operates under the appearance of democracy, making the force of control covert and invisible.
In Democracy INC: The Press and Law in the Corporate Rationalization of the Public Sphere, professor of journalism David S. Allen (2005) described the role of professionals in facilitating this managed democracy. He noted how the creation of expert knowledge is essential in this machination. Science has become a methodology to back professional legitimacy, as “individuals began to regard professional judgments, often supported by scientific data as unquestionable” (p. 54).
The Creed of Objectivity
Professionals with expert knowledge perform the role of trusted third parties who are supposed to represent the interests of citizens and make decisions on their behalf. Here, the knowledge produced in social sciences such as economics, political science and psychology is often used to maintain the status quo of power structures. From Alan Greenspan to Ben Bernanke and now Janet Yellen, economists who are appointed by the US President as chair of the Federal Reserve get to decide monetary policy for the country and exercise influence through central banks around the world. What validates their expert knowledge is an epistemological foundation called the creed of objectivity.
Social science has incorporated the empirical and positivist methodology of natural science and claimed the ability to form knowledge in a similar way as physical science. With this, researchers assert neutrality as if they transcend race, class or any personal bias. Yet, they are embedded within cultural values, and their purported value-free objectivity is not actually possible. One’s subjective agendas and personal views do not magically disappear simply by declaring it so.
Without transparency that ensures disclosure of researchers’ bias, this creed of objectivity becomes a cloak that hides their motivations. This stance of objectivity closes off any feedback, and assertions that are not tested are promoted as universally applicable truth. Money in this representative democracy becomes political money, legitimized by state authority and tied to the monetary policies of investment banks and corporations that run government behind the scenes.
Replacing Politics with Math
Now, a breakthrough of computer science has found a way to crack this closed logic of control. Bitcoin opens a path for changing the world without taking power. The whitepaper published under the pseudonym Satoshi Nakamoto put forward a vision of a “peer-to-peer version of electronic cash”, based on cryptographic proof, rather than relying on a trusted third party. The underpinning of this innovation was a science of asymmetrical security that provides a strong armory against violence, exploitation and extreme selfishness through a mechanism of consensus.
Richard Feynman, a theoretical physicist, once said that scientific integrity is learning to not fool ourselves. He noted, “The first principle is that you must not fool yourself—and you are the easiest person to fool”. In natural science, researchers are given honest feedback from the real world and nature through observation, repeated testing and experiments. On the other hand, social scientists explore dimensions more divorced from physical reality, and in their claim of neutrality, they can become blind to their own bias. This influences the outcome of their studies, and they more easily distort facts with personal opinions and emotions.
This creed of objectivity in social science has shown itself to be vulnerable to tendencies toward deception, while math is a property that is impervious to manipulation. Math cannot be fooled, as it does not respond to lies and threats. Computer science relies on solid data, rigorous testing and peer-review. This gives each person an opportunity to engage in honest work to overcome self-deception and build strong security, even as strong as the laws in the physical world.
Cypherpunks: Scientists with a Moral Code
In the existing model of governance, inherent weakness of the creed of objectivity made the system vulnerable to tyranny of the few. Economic incentives set up by a professional class made the right to free speech exclusive for the beneficiaries of this managed democracy, suppressing any views that challenge this authority. Those privileged in the system call these perspectives subjective, relegating them to mere opinion. This doctrine of false objectivity that has been predominant in academia has conditioned researchers to remain impartial. This turned the populace into passive observers, preventing them from fully connecting with their passion and values.
In the foundation of Bitcoin development, there lies a particular philosophy that revolts against this restriction of free speech imposed by central authority. In the paper The Moral Character of Cryptographic Work published in 2015, eminent computer scientist Phillip Rogaway brought forward the moral obligation of cryptographers and their importance, especially in the post-Snowden era. In this, he described a group that emerged in the late 1980’s who saw the potential of cryptography in shifting power relations between the individual and the state. These are the cypherpunks who held a belief that “cryptography can be a key tool for protecting individual autonomy threatened by power”.
In an interview with Trace Mayer, Adam Back – applied cryptographer, inventor of Hashcash, and cited in Satoshi’s whitepaper – talked about the “positive social implications arising from cryptography”. He described the ethos of the cypherpunks as writing code to bring the rights we enjoy offline into the online world. The idea is that lobbying politicians and promoting issues through the press would be a slow uphill battle. So, instead of engaging with legal and political systems, Back noted that they could simply “deploy technology and help people do what they consider to be their legal right”, and society would later adjust itself to reflect these values. The cypherpunks, with their adamant claim of subjective domains, apply real objective knowledge that comes from math to bring change.
Solidifying Technology’s Core
As the forced network effect of petrodollar hegemony begins to loosen, the empire fuels aggression, with more wars and sanctions. While this system of representation weakens, the logic of control from the old world has begun infiltrating the Bitcoin ecosystem. Regulators try to reach cryptocurrency through exchanges, and by enforcing KYC (Know Your Customer) create fertile soil for government surveillance and privacy erosion. Centralization creeps in through industrial mining and patents on hardware, with a trend toward state- and corporate-backed monopolies. All the while, established media keep writing obituaries for Bitcoin, wishing to declare the death of this new money they can’t understand.
Politics that spread through the crypto-community have been hijacking discussions on technical development. With PR, name-calling and smear campaigns, a vocal minority engages in social engineering, distracting developers who are engineering security. This drama that some perceive as Bitcoin’s existential threat brings a crisis, yet at the same time is giving us all an opportunity to solidify our commitment to this technology’s fundamentals.
Bitcoin as a premise of stateless money has brought many people together. These are free market enthusiasts, traders, libertarians, engineers, venture capitalists, anarchists and artists. Bitcoin is a disruptive technology that has large political implications. Yet, for it to manifest its true potential, we must not forget its roots in its apolitical nature – solid science. This apolitical nature is not a bug, but a feature. This is what makes Bitcoin stateless money, censorship resistant, unseizable and permissionless.
Imagination from Computer Science
Legal scholar and inventor of bit-gold Nick Szabo once noted: “Computer science gives you far more leverage to change the world than any other study in our age.” Social issues and questions of democracy have been a philosophical quandary that are generally tackled politically. They were not considered to be the purview of science. Yet now, imagination from computer science has come forward to help us work on solving these problems.
Our commitment to decentralization keeps this consensus algorithm running across the global network and allows all to participate in this scientific endeavor of Proof of Work – to show the world that equality, fraternity and freedom are not just ideals, but unshakable universal truths.
So, let us call a ceasefire in this political battle and engage with the honest work of collaborative efforts of writing code. By moving from a system of power to a consensus of equal peers, together we can find solutions to overcome challenges. From this secure foundation provided by this technology’s core, unlimited potential can be unleashed, which creates divergent currencies to carry the wishes and desires of many communities. Where politicians and leaders have failed, Bitcoin succeeds. Our surrender to this scientific process opens a door for development of protocol and gives innovation a chance for humanity to save itself from the mess we have created.

April 24, 2017
What Australia can learn from Europe’s failure with Data Retention

Australia: This month, Australia’s law mandating telecommunications data retention went into effect. It is clear that Australia learned absolutely nothing from Europe’s abysmal 10-year failure with this exact law before it was finally struck down by courts as utterly incompatible with human rights at the core of its idea. Here’s how Australia can fail a little faster on this horrendous concept, by realizing it’s not just inexcusable – it doesn’t even work.
In the wake of the 2004 Madrid bombings, a handful of hawks saw their opportunity to pass unprecedented mass surveillance legislation, under which people could be retroactively wiretapped – something that was only possible if everybody was continuously wiretapped, all the time, so the records could be retroactively reviewed. Now, actual wiretapping would not have flown, so they went with the politically-new word “metadata”, which didn’t sound nearly as bad but was conceivably much worse because it was machine-sortable: everybody’s communications metadata would be stored for a long time with the sole objective of using it against them.
It was just four people – as few as four people out of five hundred million – who were ultimately driving this disaster into being in Europe, largely through deception and Potemkin façades. In Sweden, the concept was driven pretty much only by the then Minister of Justice Thomas Bodström, and skilled activists at the time traced how he couldn’t get the Swedish Parliament to approve anything like it (for good reason), so he went for the infamous legislative “Brussels Boomerang” instead: make it a federal law at the EU level, and thereby tie the hands of the Swedish Parliament to implement it regardless of their opinion. There were three other like-minded people from other states, and that was all it took for the proposal to gain momentum at the Brussels level.
On December 14, 2005, the European Parliament approved a mandate for all states to implement “telecommunications data retention”, or as it would be more accurately described, “preemptive ongoing wiretapping of everybody in case we decide we want it later”. The purpose is to combat “terrorism and other crimes”. That little “and other crimes” turned out to include basically everything, up to and including jaywalking – and in practice, it would be almost exclusively used to hunt ordinary people sharing music and movies outside of the monopolized copyright channels.
So all of a sudden, everybody’s activity was recorded – along with timestamps and their precise geographical position – whenever they did the most minute form of communication. It was a mass tracker.
The problem is that surveillance of innocents in case they should become suspects later is fundamentally incompatible with a democracy.
However, this one didn’t go over well in Europe, even with a decision from the federal European parliament. A full one-third of European states – nine out of 27 – refused to implement the preemptive surveillance of innocents, seeing it for what it was. In other states like Germany, it was implemented and immediately struck down by their constitutional court, for good reasons.
In pushing for acceptance, there was no shortage of Potemkin façades and misdirection from politicians. An example of the talking points used:
“Telecom companies have always recorded this”: No, they haven’t. In fact, they have been absolutely, positively banned from recording any of this, except for what was absolutely needed for billing purposes. Data retention switched bulk collection of everything from “absolutely forbidden” to “mandatory”, and that’s not the small change they wanted to pretend it was.
“It’s not government surveillance, it’s the telecoms recording your activity”: As if conscripting a corporation into a most unwilling agent of the government made it not the government’s action any more. This is a particularly disgusting way to deflect responsibility for your actions.
“It’s necessary to prevent terrorism”: Except it was absolutely useless for this, and used in practice only to punish ordinary file-sharing people.
On the other side of the fence, you had a few diligent politicians like Malte Spitz in Germany, who used his own data to show people just how horrible the tracking was – he made a YouTube video showing that he could essentially be followed block by block as he was going about his daily business, and also held a TED Talk about it.
Activists also kept pushing, relentlessly, providing actual data that politicians didn’t want to exist. The German AK Vorrat – loosely translated as “working group, data retention” – was one of the more visible ones, and pointed out that the collected data had only made a difference in 0.006 percent of criminal cases.
Zero point zero zero six per cent.
In most countries, that’s the equivalent of hiring two or three extra investigative police officers, but at the cost of ordinary police pay instead of the data retention’s cost of about a billion dollars per year (or much more). In other words, the data retention is not even effective in the best of cases – neither for police operations nor for cost-efficiency. You could have solved something like 1,000 times more additional crimes for the same amount of money, just by hiring regular investigating police officers doing ordinary honest police work instead of treating everybody as a suspect.
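A back-of-the-envelope version of that comparison, with round figures that are illustrative assumptions of mine rather than sourced numbers:

    # Illustrative assumptions, not sourced figures.
    retention_cost_per_year = 1_000_000_000   # "about a billion dollars per year"
    cost_per_officer = 100_000                # assumed fully-loaded annual cost

    officers_for_same_money = retention_cost_per_year // cost_per_officer
    print(officers_for_same_money)            # 10,000 investigators

    # AK Vorrat's 0.006% figure is equated above with two or three extra officers.
    equivalent_officers = 3
    print(officers_for_same_money // equivalent_officers)   # ~3,000x the capacity

Exactly how many thousandfold depends on the assumed salary, but any plausible figure lands in the same ballpark as the “something like 1,000 times” above.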
Now, fortunately, it wasn’t just activists pushing back. Since the governments had audaciously told the telecoms operators to foot the entire bill for this, they were not happy and had a real financial interest in scuttling this construct. That, in the end, is what caused the data retention’s undoing.
It was billions of dollars of cost for the telcos that was the ultimate driver to end data retention. It was the human rights principles that gave those telcos the right of way in court.
The telcos did challenge the mandate to retain data – the most customer-focused ones flat out refused to comply, saying “take us to court”. The government didn’t, but took them before its own authorities instead (the equivalent of the US FCC), at which point the telcos took those authorities to court.
And won.
Once the courts had ruled that telcos were no longer required to store all metadata, and importantly, absorb all the cost for doing so, data retention was dead in practice. But it would take another couple of years to really drive the point home.
The legal escalation went all the way to the European Court of Justice (ECJ), which is the European equivalent of a Supreme Court. This escalation took a decade in total, but on April 8, 2014, the European Court of Justice ruled that the Data Retention Directive – the EU “federal law” – was so utterly incompatible with human rights that the court didn’t just declare it no longer in effect from that date; the ECJ ruled that it had never been in effect, annulling it retroactively and effectively erasing it from existence as a mark of shame. The court couldn’t have put its foot down any harder.
Most politicians in European states at the time noted that while they were now not mandated to preemptively violate every citizen’s privacy, there was not yet any ruling banning them at a federal level from doing so, and they sought to tweak details in their “safeguards” to keep the constructs. This missed the point of the ECJ entirely:
The problem isn’t that the data isn’t properly secured, or who has access to it and when. The problem is that surveillance of innocents in case they should become suspects later is fundamentally incompatible with a democracy. It is the core idea that is broken and unacceptable, not the details of implementation.
This disconnect baffled the courts entirely, as their key point had been made perfectly clear in the 2014 ruling: such a construct is incompatible with a democracy. Why did politicians persist in pretending it was a matter of implementation details, and not the core idea? More importantly, why was this still happening in individual states, even though there was no more federal directive mandating it?
Hawk politicians in those individual states argued that while the European states were no longer required to have data retention at the federal level, they were also not forbidden from having it as a state initiative, and so they continued at the state level what had been initiated by the federal law now shredded by the ECJ. This position could only have come from somebody who hadn’t read the fuming verdict from the European Court of Justice in 2014, which tore up the Data Retention Directive by its roots and ritually set it ablaze, in the strongest anger that judicial language is capable of expressing.
So in the judicial equivalent of “didn’t you morons hear us the first time”, the ECJ finally ruled in December of 2016 that all European states are utterly forbidden from mandating data retention from their telecommunications providers. This gave the telcos who had been objecting all along wind in their sails, and most of them deleted all the collected data that same day, to trumpeting fanfares and advertising. Meanwhile, the politicians who had been advocating these violations of human rights muttered increasingly incoherently, and have not been heard from again so far, six months later.
In conclusion, while Europe had its turn with the hated Data Retention, it would take the courts twelve years to undo it. Let us at least hope that others can learn from this mistake and not have to do all of it all over again.
Privacy remains your own responsibility, as always.
Syndicated Article
This article was previously published at Private Internet Access.

April 23, 2017
With laptops banned onboard aircraft, your data is no longer yours if you fly

Privacy: New US regulations ban laptops from the cabin on some flights, requiring them to be in checked luggage instead. One of the first things you learn in information security is that if an adversary has had physical access to your computer, then it is not your computer anymore. This effectively means that the US three-letter agencies are claiming for themselves the right to compromise the computer of any traveler on these flights.
According to the United States Ministry of Peace – sorry, Department of Homeland Security – which bills the ban as a “change to carry-on items” that affects “ten out of the more than 250 airports that serve the United States internationally”, there is a “security enhancement” because explosives can now be built into “consumer items”, and therefore laptops must now be banned from carry-on luggage and checked in instead.
When looking at this justification, the DHS notably fails to describe how it would be any safer flying with such alleged explosives in checked luggage rather than carry-on luggage onboard the same aircraft. In other words, the justification is utter nonsense, and so, there must be a different reason they issue this edict that they’re not writing about.
“The aviation security enhancements will include requiring that all personal electronic devices larger than a cell phone or smart phone be placed in checked baggage at 10 airports where flights are departing for the United States.”
When Microsoft (finally) trained every single one of their employees in security in the big so-called “security push” around the turn of the century, there were about a dozen insights that they really hammered home, again and again. One of the most important ones related to this was the simple insight of “if an adversary has had physical access to your computer, then it’s not your computer anymore”.
After all, if somebody has had physical access to the machine itself, then they will have been able to do everything from installing hardware keyloggers to booting the machine from USB and possibly get root access to some part of the filesystem – even on a fully encrypted GNU/Linux system, there is a small bootstrap portion that is unencrypted, and which can be compromised with assorted malware if somebody has physical access. They could conceivably even have replaced the entire processor or motherboard with hostile versions.
This is a much more probable reason for requiring all exploitable electronics to be outside of passengers’ field of view.
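There is no real defense once the machine has been out of your hands, but tampering with the unencrypted boot partition can at least be detected. A minimal sketch, assuming a typical GNU/Linux layout with an unencrypted /boot; the manifest must of course be stored somewhere other than the laptop itself:

    import hashlib
    import json
    import os
    import sys

    def checksum_tree(root: str) -> dict:
        """SHA-256 of every file under root, keyed by path."""
        sums = {}
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                with open(path, "rb") as f:
                    sums[path] = hashlib.sha256(f.read()).hexdigest()
        return sums

    mode = sys.argv[1] if len(sys.argv) > 1 else "verify"
    if mode == "snapshot":                    # run BEFORE checking the laptop in
        with open("boot-manifest.json", "w") as f:
            json.dump(checksum_tree("/boot"), f)
    else:                                     # run AFTER getting it back
        with open("boot-manifest.json") as f:
            before = json.load(f)
        after = checksum_tree("/boot")
        for path in sorted(set(before) | set(after)):
            if before.get(path) != after.get(path):
                print("changed or tampered:", path)

This catches a modified bootloader or kernel image; it does nothing against a hardware keylogger or a swapped motherboard, which is exactly why physical access remains game over.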
Remember that both the NSA and the CIA have a history of routinely pwning devices, even from the factory, or intercepting them while being shipped from the factory. (There was one incident where this was revealed last year, after the courier’s package tracking page showed how a new keyboard shipped to a Tor developer had taken a detour around the entire country, with a remarkable two-day stop – marked “delivered” – at a known NSA infiltration facility.)
Now imagine that the laptops and other large computing devices of these travelers — remember that the Tor developer in question was an American citizen! — will be required to be surrendered to the TSA, the CIA, the NSA, the TLA, and the WTF for several hours while in flight. It’s just not your device anymore when you get it back from the aircraft’s luggage hold – if it was ever there.
If your laptop has been checked in and has been in the TSA’s control, it can no longer be considered your laptop. Any further login to the compromised laptop will compromise your encrypted data, too.
The choice of the ten particular airports is also interesting. It’s the key airports of Dubai, Turkey, Egypt, Saudi Arabia, Kuwait… all predominantly Muslim countries. Some have pointed this out as racial profiling, but there are signs it may be something else entirely and more worrying.
For example, The Intercept presents the measure as a “Muslim laptop ban”. That might or might not be an accurate framing, but the worrying part is that this is the best case scenario. More likely, it is a so-called political test balloon to check how much protest erupts and, to put it bluntly, whether they get away with it. If they do, then this can be a precursor to a much wider ban on in-flight laptops – or, as you would more correctly have it, a much wider access for three-letter agencies to people’s laptops and data.
Syndicated Article
This article was previously posted at Private Internet Access.

April 15, 2017
Airport: “We’re tracking every single footstep you take and can connect it to your mail address, but your privacy is safe because we say so”

Netherlands: A sign on the revolving entrance to a European airport, one sign among many, says “the airport employs wi-fi and bluetooth tracking; your privacy is ensured”. The vast majority of people have no idea what this is, and just see it as one sign among many like “no smoking”. But this cryptic term describes indoor mass real-time positioning of every individual in the area at the sub-footstep level, and the airport knows who many of those individuals are.
When your mobile phone has Wi-Fi activated, it continuously transmits a network identity signal – a MAC address – asking its environment which Wi-Fi networks are available in the area, looking for known networks to connect to. It is possible to use a network of Wi-Fi base stations in a limited area to pinpoint the exact physical location of your phone in that area: by comparing the signal strength of your phone’s identity ping as received by multiple stations, and knowing where those stations are, your phone’s location can be calculated in microseconds with sub-footstep accuracy. And as your phone keeps pinging its environment to search for Wi-Fi networks, the end effect is that your movements are tracked basically down to the footstep level in physical space and the second level on the timeline.
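How little equipment this takes is easy to demonstrate. Here is a sketch using the scapy packet library, assuming a Linux machine with a wireless interface already in monitor mode (“mon0” is a placeholder name); each base station in a tracking mesh sees essentially this data:

    from datetime import datetime
    from scapy.all import sniff, Dot11ProbeReq, RadioTap

    def log_probe(pkt):
        """Print (time, MAC, signal strength) for each Wi-Fi probe request heard."""
        if pkt.haslayer(Dot11ProbeReq):
            mac = pkt.addr2                   # the phone's MAC address
            radiotap = pkt.getlayer(RadioTap)
            rssi = getattr(radiotap, "dBm_AntSignal", None) if radiotap else None
            print(datetime.now().isoformat(), mac, rssi)

    # Comparing the signal strengths seen by several stations with known
    # positions is what turns these log lines into a sub-footstep position fix.
    sniff(iface="mon0", prn=log_probe, store=False)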
We know this because it caught some rare attention when the sub-footstep individual realtime tracking was rolled out citywide in a mid-size town in Europe, and was cracked down on by that state’s Privacy Board (thankfully).
Let’s take that again: sub-footstep-level identified realtime tracking has already been in place in major cities at the city level.
That’s why it’s positively infuriating to see this vague statement on an airport door, knowing what it means, but also knowing that it has been deliberately worded so that 99% of the people who even read it will not understand what it means. And most people won’t even read it.
“This terminal uses a Wi-Fi and Bluetooth-based tracking system. Your privacy is ensured.” Also, war is peace, freedom is slavery, and ignorance is strength. My privacy is “ensured”? Am I supposed to be grateful because you’re not yet demanding a blood sample?
Your privacy is ensured “because we say so”, apparently.
It gets worse. Your phone is continuously broadcasting its MAC address, right? But the airport has no way of knowing which MAC address belongs to which person, right? So even though this is bad, the airport doesn’t know how you moved, when, and where, and how fast, and with whom… right?
Except, the tracking is done with a Wi-Fi network, remember? Which you can and will connect to, because it broadcasts the name “Free Airport Wi-Fi”. And the only thing you need to provide this network in order to get free and fast net access, thereby connecting with the very MAC address which has been tracked, is your email address.
…and suddenly the airport can connect your physical movements to your email address, and therefore most likely to your identity. It now has the ability to know how you moved through the airport, where you ate, whom you met with and for how long – the list goes on, a list that shouldn’t exist. The only safeguard against the airport doing so, and using it against you, is a vague “trust us”.
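The linkage step itself is a single dictionary lookup once both datasets exist; a toy sketch with invented records:

    # From the "Free Airport Wi-Fi" captive portal: MAC address -> email address.
    portal_logins = {"aa:bb:cc:dd:ee:ff": "traveler@example.com"}

    # From the tracking mesh: MAC address -> timestamped positions.
    movement_log = [
        ("aa:bb:cc:dd:ee:ff", "2017-04-15T09:02:11", "security checkpoint 3"),
        ("aa:bb:cc:dd:ee:ff", "2017-04-15T09:40:55", "cafe, departure hall"),
    ]

    for mac, when, where in movement_log:
        who = portal_logins.get(mac, "unidentified device")
        print(who, "was at", where, "at", when)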
It’s not a random small countryside airport, either. It is Amsterdam-Schiphol, which is the 12th largest airport in the world, and the third largest in Europe after London-Heathrow and Paris-Charles-de-Gaulle.
Your privacy remains your own responsibility.
Syndicated Article
This article has previously been published at Private Internet Access.

April 10, 2017
Would U.S. Congress find it acceptable that their phonecalls were recorded, sold, and published?
Repression: The United States Congress has decided that Internet Service Providers shall be Common Carriers but without the obligations of a Common Carrier. Specifically – and this was the recent shocker – telephone secrecy doesn’t apply to them as it does to other telecommunications providers, and ISPs are also free to modify anything they like without liability for it. This was an unexpected turn following the FCC’s upgrade of the Internet to a so-called Title II Utility. But what would happen if we took this to its conclusion and started publishing Congress’ intercepted phone calls, as they have decided can be done with Internet traffic?
The United States Congress decided that Internet Service Providers can not just monitor and sell your Internet traffic, but also modify it, which has been met with a justified storm of criticism. Some of the blowback states that “this has been the case for a long time and up until recently”, and that this Congress decision merely “restores” things to how they were before – and this is where it gets interesting: what happened “recently” – more specifically on June 12, 2015 – and which Congress saw fit to reverse, was that the FCC ruled the Internet to be a so-called Title II Utility: a utility on par with the telephone network. This FCC ruling turned the Internet Service Providers into so-called Common Carriers.
When you’re communicating on the telephone network, the telco can’t record and sell your conversations. Privacy laws prevent this.
What makes it reasonable, then, that a voice call which is technically transmitted over the Internet can be recorded and sold? Where just the method of transmission is different?
Here’s the kicker in this context – most telco voicecalls are already transmitted over the Internet. It’s simply much cheaper for telcos to use the simple Internet TCP/IP network for transmission than their own expensive and complicated SS7 network, so this migration has been silently going on for the past two decades.
This leads us to the following question:
Since Congress’ telephone calls are most likely transmitted over the Internet, would they find it acceptable that those telephone calls were recorded, sold, and published, as they have just decided?
Of course, they would go ballistic. They would feel… betrayed. Violated. Just as we feel, because we understand the decision they made, and they don’t.
It is hard to overstate the technical illiteracy of today’s lawmakers. They are basically behaving like a drunken elephant trumpeting about in a porcelain factory.
To give you one concrete example of this, the UK Home Secretary went on record saying there has to be people who understand the necessary hashtags to prevent hate speech from getting published on the Internet.
Yes, you read that right. The. Necessary. Hashtags. And this is the person ultimately responsible for these issues. Would you trust their decisions to be well informed?
When working in the European Parliament, I observed lawmakers get their email printed for them by their secretary and handed to them as papers on their desk. These are the people making federal law about the Internet.
Today’s lawmakers are basically behaving like a drunken elephant trumpeting about in a porcelain factory.
As a side note, this was observed as early as 2006, when the protests against the (first) raid against The Pirate Bay had signs saying “give us back our servers, or we’ll take your fax machine”.
Going back to the question of how lawmakers would react if their phonecalls were published: they would probably press charges for wiretapping, revealing that they have just created two laws that are completely in conflict with each other, and highlighting what we’ve already concluded – the expectation of privacy comes from making a phonecall, not from a deep understanding of how that phonecall is going to be transmitted at the technical level, whether it’s routed through the expensive telco SS7 networks (not okay to eavesdrop) or the inexpensive TCP/IP networks (now okay to eavesdrop, modify, and sell).
This is where we’re getting to the heart of the issue: Lawmakers don’t understand Analog Equivalent Rights.
They don’t understand that it’s completely reasonable for our children to have at least the same minimum of civil liberties as our parents had, as it translates to their everyday environment. This obviously includes an expectation of privacy when holding a private conversation, regardless of the transmission technology used under the hood.
Privacy remains your own responsibility.
Syndicated Article
This article has previously appeared at Private Internet Access.
March 31, 2017
When good loses to lawful: this thing about proper legal procedures with indefensible outcomes

Repression: It’s interesting to watch people rush to defend the legal processes in last week’s story about a man jailed indefinitely for refusing to decrypt, asserting that everything is in order. In doing so, they point at individual details of the legal process, say there’s nothing odd about those details, and disregard the outcome: that somebody is in fact in jail indefinitely, without charges, for encryption. It doesn’t matter if each and every step is defensible: if that’s the outcome, the system as a whole is really, really bad, including the individual steps.
Lots of people were rushing to defend the fact that a man is in jail indefinitely for refusing to decrypt, and trying to spin the story as though this definitely didn’t mean that you can be put in jail for encryption. This is the worst kind of “good bureaucrat” behavior – one who can’t see an evil outcome right when it’s in front of them, just because the individual steps are familiar.
@Falkvinge @MayaKaczorowski no it hasn't. It's not that broad, it's about them compelling someone to turn over evidence—compulsion under 5A
— Whitney Merrill (@wbm312) March 21, 2017
Here’s an example. “He was ordered to turn over evidence, but didn’t, and so the judge had a right to hold him in contempt of court”. Yes, and the outcome is still that somebody was put in prison indefinitely for encryption. Regardless of who had what right along the way, the entire chain of such rights is unambiguously evil and really bad – including the individual, procedurally correct steps making up that chain.
@Falkvinge @MayaKaczorowski no it has a significant legal distinction. It's not settled law, I gave a talk on this at Shmoocon in 2015.
— Whitney Merrill (@wbm312) March 21, 2017
Here’s another example: “There’s no law saying that encryption is illegal, and the legal community is divided on whether this is correct.” That’s factually true but doesn’t change things a bit: when you can be put in prison indefinitely without charges for encrypting, somebody saying “that may or may not happen” is not contradicting the fact that it can, in fact, happen – as proven by the fact that it has already happened, if nothing else. And when you can be put in jail for encryption, the net effect is that it’s been outlawed — even if you sometimes aren’t.
@Falkvinge @Carlos_Perez No, they've just upheld their right to ask you to decrypt so they can examine the evidence.
— Walter Williams (@LESecurity) March 21, 2017
Another: “No, they’ve just demanded the evidence as they have a right to do, and if he doesn’t produce it, he can be held in contempt of court.” If exercising this right results in somebody being imprisoned indefinitely without charges for encryption, then again, the system is broken – and that is regardless of whether each and every step along the way is formally correct.
Yet a fourth reaction – one without a Twitter example at hand – has been “yes, the Fifth Amendment is supposed to work like this”: as if this were something that concerned only the Fifth Amendment, or only the United States of America.
This knee jerk reaction of defending the situation as “procedurally correct”, when somebody is indefinitely in jail without charges for refusing to decrypt, is the absolutely worst possible approach – it is the “but it is lawful” approach, without regard for consequences. It is the “good bureaucrat” reaction.
It is absolutely crucial to understand that lawful does not mean moral or good. There’s lawful good and lawful evil, and they are not the same thing. Often, you must choose between lawful and ethical. Somebody who produces an evil result, as here, by following the law to the letter is choosing to uphold an evil system, and when push comes to shove, following instructions is never a defense – neither morally nor legally.
But the judge, Theodor Seidel, said as he pronounced the sentences [against the former Berlin Wall guards], “Not everything that is legal is right.”
It is this trickery with procedure to produce (un)desirable outcomes that is common in the worst parts of politics, the inability to focus on what’s actually happening but instead trying to justify the procedure leading up to the evil result. “The marriages were lawful”, as the prison guard said on the record.
Isaac Asimov explores this a lot in his Robot series of novels – how changing small parameters of procedures, and having everybody follow procedure, can lead to dramatically evil outcomes. One famous example from his novels is a roboticist who designs an autonomous AI-powered fighter craft, built to assume that the ships it fights are also robots, which may be destroyed in warfare – when in fact they will be human-piloted enemy craft. This is a clear violation of the First Law, which the robotic fighter craft couldn’t knowingly commit – so it is designed to commit it unwittingly, by following procedure.
This is what happens when procedure is followed, and principle lost.
Don’t let that happen.
Syndicated Article
This article was previously published at Private Internet Access.

March 28, 2017
With shock appeals ruling, the United States has effectively outlawed file encryption

Privacy: An appeals court has denied the appeal of a person who is jailed indefinitely for refusing to decrypt files. The man has not been charged with anything, but was ordered to hand over the unencrypted contents on police assertion of what the contents were. When this can result in lifetime imprisonment under “contempt of court”, the United States has effectively outlawed file-level encryption – without even going through Congress.
Last week, a US Appeals Court ruled against the person now detained for almost 18 months for refusing to decrypt a hard drive. The man has not been charged with anything, but authorities assert that the drive contains child pornography, and they want to charge him for it. As this is a toxic subject that easily spins off into threads of its own, for the sake of argument here and for sticking to the 10,000-foot principles, let’s say the authorities instead claim there are documents showing tax evasion on the drive. The principles would be the same.
Authorities are justifying the continued detention of this person – this uncharged person – with two arguments that are seemingly contradictory: First, they say they already know in detail what documents are on the drive, so the person’s guilt is a “foregone conclusion”, and second, they refuse to charge him until they have said documents decrypted. This does not make sense: either they have enough evidence to charge, in which case they should, or they don’t have enough evidence, in which case there’s also not enough evidence to claim with this kind of certainty there are illegal documents on the drive.
In any case, this loss in the Appeals Court effectively means that file- and volume-level encryption is now illegal in the United States.
Without going through Congress, without public debate, without anything, the fuzzy “contempt of court” has been used to outlaw encryption of files. When authorities can jail you indefinitely – indefinitely! – for encrypting files out of their reach, the net effect of this is that file level encryption has been outlawed.
So were there illegal documents on the drive? We don’t know. That’s the whole point. But we do know that you can be sent to prison on a mere assertion of what’s on your drive, without even a charge – effectively for life, even worse than the UK law which will jail you for up to five years for refusing to decrypt and which at least has some semblance of due process.
The point here isn’t that the man “was probably a monster”. The point is that the authorities claimed that there was something on his encrypted drive, and used that assertion as justification to send him to prison for life (unless he complies), with no charges filed. There’s absolutely nothing saying the same US authorities won’t claim the same thing about your drive tomorrow. Falsely, most likely. The point is that, with this ruling, it doesn’t matter.
March 22, 2017
Great: Now your sex toys are used to spy on you and sell your private habits, too

The makers of an Internet-connected sex toy have settled to pay a small amount to some 300,000 owners of a vibrator which was used to spy on their sex habits, which the manufacturer collected as individually identifiable data. Additionally, the bluetooth-controlled sex toy was utterly insecure, allowing remote anonymous administration. To the mess of IoT devices spying on us, we now need to add the bedroom.
In Las Vegas in 2016, at Defcon, hackers g0ldfisk and followr originally disclosed the We-Vibe vibrator vulnerability, observing that anybody in bluetooth range could take control of the device. As the duo noted during their presentation, such an intrusion would amount to sexual assault – meaning we can now add sexual assault to the list of possible consequences of unsecured IoT devices.
This vulnerability – along with a shockingly audacious and undisclosed data collection about its users’ sexual habits, like temperature and sexual intensity, collected insecurely as identifiable data connected to their e-mail addresses – led to the class action lawsuit, which has now been settled. The manufacturer, We-Vibe, will pay four million Canadian dollars – expected to result in maybe C$500 per violated individual at best.
The lawyers for the anonymous plaintiffs contended that the app, “incredibly,” collected users’ email addresses, allowing the company “to link the usage information to specific customer accounts.” — US NPR
This is just the start of devices made by engineering morons who may understand their original field – sex toys – but have absolutely no clue about Internet-level security. They are not alone: corporations as large as the biggest banks enjoyed the comfort of having a private network until just recently, and have had to wake up in a hurry to the fact that all input must be regarded as hostile until proven friendly. The engineering principle of “your code is the last piece of code standing” was something that woke Microsoft up as late as fifteen years ago, and they were late to the IT game – but that’s nothing compared to the non-IT players who want in on the Internet of Things and fun, profitable apps, and who still haven’t learned.
We can add sexual assault to the list of possible consequences of insecure IoT devices.
Maybe the most egregious thing about all this is that the vibrator maker continues to collect the private data, just with a “clarified” privacy policy, where two things immediately stand out. First, the collection of sex habit data is opt-out, meaning that your sex life will be spied on unless you take active action to not have it be so (having this “opt-out” is strictly illegal in several parts of the world, and for good reason). Second, they reserve the right to sell such data to anyone they like, but dress it in language suggesting the opposite: “We will never sell your usage data to a third party … except for as specified in our policy”. That last part makes the first part completely useless; what this means is “we will sell your usage data to a third party as specified”.
Maybe the most egregious thing about this story is that the vibrator maker continues to collect the private data, just with an obscure-and-opt-out privacy policy saying so.
Your privacy indeed remains your own responsibility.

