Facebook, in my mind, is much worse in terms of their lack of transparency. But there are also structural differences in how the two do business: Google ultimately uses user data to serve ads in dedicated channels, and even if there's a certain ‘creepiness’ factor to the lengths they go to in tracking behavior, all they’ll ever do is change which ads they push to you. Search will always work as it does, email will never work differently, and so forth.


Facebook, on the other hand, has already demonstrated that they have no qualms about performing social experimentation and engineering in order to support their business. They've run broad-based tests across hundreds of thousands of users in which the emotions of a subset were manipulated by disproportionately feeding them negative posts. They're actively curating content in a way that isolates individuals from news and information they don't prefer. And, of course, the integration of users with businesses is often used surreptitiously in a manner that implies endorsements or commercializes an individual's content, usually without their explicit consent.


Given that many people rely on Facebook as a tool to manage a huge portion of their social life, the repercussions of this are immense. Yet people don’t seem to factor this into their use of the service: they aren't aware that Facebook has free license to do these things, and Facebook takes advantage of that. And it's all compounded when you consider the lengths Facebook goes to in order to drive up the amount of time users spend on the platform (time is money for them) - usually at the expense of any and all socialization outside of it.




Governments will always want more information and more access. From their point of view, there’s no real downside to having it, since the cost of trying to make it useful is easily covered by national budgets. And the upside has the potential to be massive: such data can help uncover illicit activity, it can directly fuel geopolitical strategy and espionage, and it can (emphasis on can, not is) be a useful counterterrorism tool. If you take the perspective of a government, why wouldn't you try to gather as much information as possible?


The only real forces acting against this tendency are existing laws and the outrage of citizens. In the United States, certain privacy and other protections are codified in some of our oldest and most enshrined laws, and to this day we’re only beginning to understand what they provide us in terms of privacy in the digital age. Working against this is the tepid response we've seen from voters and politicians alike to the revelations disclosed by Edward Snowden et al., which in and of itself does a lot to explain why so little seems to have changed with respect to government operations in the digital space.




I understand your frustration, but I think there are better examples of what you mean. I consider myself very privacy-minded, but this is actually an example where I would advocate for mandatory tracking of insured vehicles, not against.


The data collected by these devices for insurance companies isn't particularly valuable to anyone other than the driver and the insurer itself. If Progressive were hacked and that data were dumped into the public, for example, it would hardly be the sort of data that could harm any one individual. In fact, it would only really be useful in the aggregate anyway: researchers or competitors might learn about driver behavior in a way they otherwise couldn't have. So the "cost" of this collection is low, because both the risk and the potential damage are themselves low.


Now look at the positives. Without these trackers, car insurers have no idea how aggressively someone drives on a regular basis - they can't really tell if someone speeds consistently, brakes abruptly, takes turns far too quickly, and so forth. Yet insurers know that these behaviors correlate very strongly with more accidents. Consequently, they can't account for driving behavior in the rates they charge, even though it matters a lot - meaning that safer drivers are effectively subsidizing the rates of aggressive ones, since the two don't appear any different to the insurer.


With these data trackers, this dynamic is all but eliminated - an insurance company not only knows if you're a safe driver, but is forced (due to competition) to offer you a lower insurance rate, since they know that you'll cost them less in the long run. This gives safe drivers a huge incentive to maintain their habits. Meanwhile, aggressive drivers (who drive in a manner that is tangibly more dangerous) are hit with higher rates, and thus have a huge incentive to change their behavior and drive safer.
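
To make the mechanism concrete, here's a toy sketch in Python of how a usage-based premium might be computed from telematics events. Every name, weight, and cap here is a hypothetical illustration (not Progressive's or any insurer's actual model - a real insurer would fit an actuarial model to claims data), but it shows the basic shape: per-mile event rates feed a risk score, and the score scales the premium up or down within limits.

    # Toy sketch of usage-based premium pricing. All names, weights, and
    # caps below are hypothetical illustrations, not any insurer's model.
    from dataclasses import dataclass

    @dataclass
    class TripTelemetry:
        miles: float
        hard_brakes: int        # sudden-deceleration events logged by the device
        speeding_seconds: int   # time spent meaningfully over the limit
        sharp_turns: int        # high-lateral-g cornering events

    def risk_score(trips: list[TripTelemetry]) -> float:
        """Collapse per-mile event rates into a single risk score."""
        total_miles = sum(t.miles for t in trips) or 1.0

        def per_100mi(n: float) -> float:
            return 100.0 * n / total_miles

        brakes = per_100mi(sum(t.hard_brakes for t in trips))
        speeding = per_100mi(sum(t.speeding_seconds for t in trips))
        turns = per_100mi(sum(t.sharp_turns for t in trips))
        # Hypothetical weights; a real actuary would fit these to claims data.
        return 0.05 * brakes + 0.002 * speeding + 0.03 * turns

    def adjusted_premium(base_premium: float, trips: list[TripTelemetry]) -> float:
        """Scale the base premium by observed risk, capped at +/-30%."""
        multiplier = min(1.30, max(0.70, 0.85 + risk_score(trips)))
        return base_premium * multiplier

    # A cautious driver gets a discount; an aggressive one pays more.
    safe = [TripTelemetry(miles=500, hard_brakes=2, speeding_seconds=30, sharp_turns=1)]
    risky = [TripTelemetry(miles=500, hard_brakes=40, speeding_seconds=1800, sharp_turns=25)]
    print(adjusted_premium(1000, safe))   # 888.0: below the base rate
    print(adjusted_premium(1000, risky))  # 1300.0: capped at +30%

The exact functional form matters far less than the fact that the premium is now a function of observed behavior - that's what creates the incentive.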


So, through a simple and fair mechanism, you can encourage safer drivers to keep it up, encourage riskier drivers to improve, and offer a more competitively priced service. Meanwhile, traffic accidents and fatalities go down, since everyone now has strong, active incentives to drive in a safer manner.


That all seems like a very significant positive in the aggregate, compared to the negative of collecting that information. To me, at least, it's well worth it.


...Anyway, all of this to say that, although "normalcy" is never something we should try to muscle people into, optimizing societal outcomes through incentives (not rules) is actually a fairly brilliant way to achieve goals like reducing traffic fatalities or offering more competitive, less risky insurance rates. Where the debate should lie, I feel, is in domains where the information is much more costly to give up, or where the outcome isn't obviously positive.




Well, with hackers who are installing malware, all bets will always be off, right? That’s an InfoSec issue more than a “privacy” issue, insofar as we never trust hackers but do place at least some trust in the apps, devices, and services we use. The privacy issue is about how that trust works: what rights we have, how we can ensure that data exchange has consent and other protections, and so on.


Really, the best thing we could hope for in this realm is stringent laws that require clear disclosure of how someone might use certain kinds of information. Many countries already have these sorts of rules in finance and healthcare. I don’t know how feasible it would be to universalize them, though.


Of course, then you have to consider how these laws would interact internationally, and that's just a horrible mess that might never be solvable. Data in transit may only be as protected as the least regulated country it passes through.




I’d worry a lot more about bad actors than about governments. Governments have geopolitical motives that have a negligible impact on most individuals. Companies and individuals usually act out of profit, reputation, or plain anger, and all of those are far more destructive to people.


With this whole privacy discussion, I think it’s helpful to see it as a game of incentives, and to treat data as just another form of currency. People want information because it has value, and people want privacy because disclosing certain kinds of information means forgoing the value it contains. Just as we regulate and provide safeguards for how money and goods/services are moved around and exchanged, I think we need to treat information like a tangible good, even if it’s technically an intangible 'thing'. Because when you think about how we regulate money:

  • If you own it, it's yours, you have total control of it, and nobody can claim it as their own.
  • If you exchange it, you have a right to know exactly what you're getting for it, and what the other party is getting in return.
  • If you give it, you forfeit all right to own and/or control it.

...that's all basically how we should probably regulate data, too.