Privacy, data-mining and spyware: does it affect you?

  • No, I don't mind at all

    Votes: 1 14.3%
  • No, but I do worry where it's headed

    Votes: 1 14.3%
  • I don't feel strongly yes or no

    Votes: 0 0.0%
  • Yes, but I choose to live with it

    Votes: 3 42.9%
  • Yes, I don't want this

    Votes: 2 28.6%

  • Total voters
    7

Phuncz

Lord of the Boards
Original poster
SFFn Staff
May 9, 2015
Disclaimer: this is my view and my view alone. I want to start a discussion about this and I hope we can keep it civil in any direction it goes.

With many companies admitting they do it, many hiding it, some living off of it and some claiming to provide better services because of it, we can't ignore the amount of data-mining going on in today's websites, software and devices.

I personally feel butt-hurt, as someone who has watched the internet claw its way from 56K dial-up to the Gigabit fiber you can have today. Where exactly we went from free-loading on websites that were somehow sponsored to costs being covered by selling your profile is unclear to me, but it seems Google was one of the first to profit from selling our usage behavior, interests and preferences. While I can't be mad at Google for this, because they don't really hide it, more and more companies do hide it, dodging (in my opinion) their responsibility to tell you straight that they are selling your activity, burying it instead in their EULA behind layers of ambiguous wording open to interpretation.

Many, seemingly all, governments seem to hop on the bandwagon too, with no care for a person's right to privacy. Why do I care, personally? Because this world seems to be slowly changing into one where all non-conforming behavior is outlawed. For example, a few car insurance companies in Belgium were recently openly very pleased with their new project: tracking people's driving behavior in their cars and basing insurance costs on how gently they drive. What counts as baseline driving? What about sporty cars with superior handling, acceleration and braking? What about sudden maneuvers because some idiot cuts you off?

But GPS in cars is not new; maybe your smartphone's GPS is already selling your routes and the POIs you pass to a third party? How about anti-theft GPS trackers: are they also logging your every move so a company can offer you "personalised" ads? We don't know without detective-style investigation, because these companies hide behind their blurred EULAs and feel no pressure from the government to be transparent.

Recently Europe struck down the Safe Harbor agreement, which was supposed to protect the privacy of European citizens, because keeping data securely overseas doesn't make it private when America has its own set of rules plus the Patriot Act. This raises the question of whether we can really trust just any software or service with our data. More and more people are lured into installing apps only to find out later they were free because you were the product. Or that they were installing malware, like an iOS jailbreak from China a while ago. Or how about Stuxnet and other malware that operates on a whole new level of stealth? It took years to uncover some of the most devious malware, linked to governments in every corner of the world, yet every day people still fall prey to CryptoLocker through sheer ignorance.

At what point can we stop saying this is for the greater good and start saying we are heading straight into Orwell's "1984" nightmare?
 

jtd871

SFF Guru
Jun 22, 2015
Phuncz,

I don't disagree with the general thrust of your rant.

I do take some small solace, however, in knowing that many implementations of big data harvesting are flawed or are of limited value - either due to time, limited information or space. For example, my Google profile has me pegged as a computer scientist due to the websites (like this one) that I frequent, rather than my actual occupation.

My wife has worked in online advertising for years, and it seems that many of the campaigns she works on get optimized for "clickthrough" - never mind that maximizing that particular metric results in useless clicks as a great many clicks are generated by web-bots and spiders. It seems that actual real people (Nigerian inheritance and Cryptolocker scams notwithstanding) don't very often intentionally click on ads.

I do sympathize with your wish that privacy statements were more plain language. "Yes, we collect data based on your interaction with our website. Yes, we sell that data to others. If US law enforcement wants your information, we hand it to them on a silver platter. You've been warned."

I assume any "free" service is provided that way because I, or my eyeballs, am the product. That being said, I use an adblocker in my browser. Since companies and governments cannot be trusted to do the right thing, as "the right thing" is a matter of interpretation, I accept a limit to my privacy or forgo the convenience of a "free" service.

J
 

PlayfulPhoenix

Founder of SFF.N
SFFLAB
Chimera Industries
Gold Supporter
Feb 22, 2015
The line where we went from free-loading on websites that were somehow sponsored to costs being covered by selling your profile is unclear to me, but it seems Google was one of the first... While I can't be mad at Google for this because they don't really hide this fact, more and more companies do and dodge their responsibility (in my opinion) of telling you straight they are selling your activity, without hiding it in their EULA...

Facebook, in my mind, is much worse in terms of their lack of transparency. But there are also the structural aspects of how the two do business: Google ultimately uses user data to serve ads in dedicated channels, and even if there's a certain ‘creepiness’ factor to the lengths they go to track behavior, all they'll ever do is change which ads they push to you. Search will always work as it does, email will never work differently, and so forth.

Facebook, on the other hand, has already demonstrated that they have no issue with performing social experimentation and engineering in order to support their business. They've done broad-based tests across hundreds of thousands of users whereby the emotions of a subset were manipulated by disproportionately feeding them negative posts. They're actively curating content in a way that isolates individuals from news and information they don't prefer. And, of course, the integration of users with businesses is often surreptitiously used in a manner that implies endorsements or commercializes an individual's content, usually without their explicit consent.

Given that many people rely on Facebook as a tool to manage a huge portion of their social life, the repercussions of this are immense. Yet people don't seem to factor this into their use of the service, because they aren't aware that Facebook has free license to do these things, and Facebook consequently takes advantage. And this is all compounded when you consider the lengths Facebook goes to in order to drive up the time users spend on the platform (time is money for them), usually at the expense of any and all socialization outside of it.

Even many, seemingly all, governments seem to hop on the bandwagon and don't care about a person's rights for privacy.

Governments will always want more information and more access. From their point of view, there’s no real downside in having it, since the cost of trying to make it useful is easily covered by national budgets. And the upside has the potential to be massive: such data can help uncover illicit activity, it can directly fuel geopolitical strategy and espionage, and it can (emphasis on can, not is) be a useful counterterrorism tool. Taking the perspective of a government, why wouldn't you try to gather as much information as possible?

The only real forces acting against this tendency are existing laws and the outrage of citizens. In the United States, certain privacy and other protections are codified in some of our oldest and most enshrined laws, and to this day we're only beginning to understand what they provide us in terms of privacy in the digital age. Opposing this is the tepid response we've seen from voters and politicians alike to the revelations disclosed by Edward Snowden et al., which in itself does a lot to explain why so little seems to have changed with respect to government operations in the digital space.

Why do I care, personally ? Because this world seems to be slowly changing into one where all non-conform behavior is being outlawed. For example, recently a few car insurance companies in Belgium were openly very content with their new project: tracking people's driving behavior in cars and base their insurance costs on how gently you drive.

I understand your frustration, but I think there are better examples of what you mean. I consider myself very privacy-minded, but this is actually an example where I would advocate for mandatory tracking of insured vehicles, not against.

The data collected by these devices for insurance companies isn't particularly valuable to anyone other than the driver and the insurer itself. If Progressive was hacked, and that data was dumped into the public, for example, it would hardly be the sort of data that could harm any one individual. In fact, it would only really be useful in the aggregate anyway, since perhaps some researchers or competitors could learn about driver behaviors in a way they otherwise wouldn't have been able to. So, the "cost" of this collection is low because both the risk and the potential damage are themselves low.

Now look at the positives. Without these trackers, car insurers have no idea how aggressively someone drives on a regular basis - they can't really tell if someone speeds consistently, brakes abruptly, takes turns far too quickly, and so forth. Yet insurers know that these behaviors correlate very strongly with more accidents. Consequently, they're forced not to account for driving behavior in the rates they charge, even though it matters a lot - meaning that safer drivers are effectively subsidizing the rates of aggressive ones, since they don't appear any different to the insurer.

With these data trackers, this dynamic is all but eliminated - an insurance company not only knows if you're a safe driver, but is forced (due to competition) to offer you a lower insurance rate, since they know that you'll cost them less in the long run. This gives safe drivers a huge incentive to maintain their habits. Meanwhile, aggressive drivers (who drive in a manner that is tangibly more dangerous) are hit with higher rates, and thus have a huge incentive to change their behavior and drive safer.

So, through a simple and fair mechanism, you can now encourage safer drivers to keep it up, encourage riskier drivers to improve, and provide more competitive services for the price. Meanwhile, traffic accidents and fatalities go down, since everyone has newly realized, strong, and active incentives to drive in a safer manner.
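As a hedged sketch of this incentive mechanism in Python: every metric name, weight, discount cap and rate below is invented for illustration, not any real insurer's model.

```python
# Toy usage-based insurance pricing: turn driving telemetry into a
# safety score, then scale a base premium so safe drivers pay less
# and aggressive drivers pay more. Every number here is made up.

def driving_score(hard_brakes_per_100km: float,
                  speeding_minutes_per_100km: float) -> float:
    """Return a score in [0, 1]; higher means safer driving."""
    penalty = 0.05 * hard_brakes_per_100km + 0.02 * speeding_minutes_per_100km
    return max(0.0, 1.0 - penalty)

def monthly_premium(base_rate: float, score: float,
                    max_discount: float = 0.30,
                    max_surcharge: float = 0.40) -> float:
    """Scale the base premium: scores above 0.5 earn a discount of up
    to max_discount, scores below 0.5 incur up to max_surcharge."""
    if score >= 0.5:
        factor = 1.0 - max_discount * (score - 0.5) / 0.5
    else:
        factor = 1.0 + max_surcharge * (0.5 - score) / 0.5
    return round(base_rate * factor, 2)

# A gentle driver vs. an aggressive one, on the same 100 EUR base:
gentle = driving_score(1.0, 2.0)        # few harsh events -> high score
aggressive = driving_score(12.0, 20.0)  # many harsh events -> low score
print(monthly_premium(100.0, gentle))      # discounted
print(monthly_premium(100.0, aggressive))  # surcharged
```

The competitive pressure described above would correspond to insurers tuning `max_discount` upward to attract the drivers who cost them least.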

That all seems like a very significant positive in the aggregate, compared to the negative of collecting that information. To me, at least, it's well worth it.

...Anyway, all of this to say that, although "normalcy" is never something we should try to muscle people into, optimizing societal outcomes through incentives (not rules) is actually a fairly brilliant way to go about realizing some sort of outcome, such as reducing traffic fatalities, or providing a more competitive and less risky insurance rate. Where the debate should lie, I feel, is in domains where information is much more costly, or where the outcome isn't obviously positive.

This brings up questions if we can really trust just any software or service with our data. More and more people are lured into installing apps only to find out later it was free because you were the product. Or that it was installing malware, like an iOS jailbreak from China a while ago.

Well, with hackers who are installing malware, bets will always be off, right? That’s an InfoSec issue more than a “privacy” issue insofar that we never trust hackers, but place at least some trust in the apps and devices and services we use. The privacy issue is one of how that trust works, what rights we have, how we can ensure that data exchange has consent and other protections, and so on.

Really, the best thing we could hope for in this realm are stringent laws that require clear disclosure about how someone might use certain kinds of information. Many countries have these sorts of rules in finance and healthcare. I don’t know the feasibility of universalizing them, though.

Of course, then you have to consider how these laws would interact internationally, and that's just a horrible mess that might never be solvable. Data in transit may only be as protected as the least regulated country it passes through.

At what point can we stop saying this is for the greater good and start saying this is right into Orwell's "1984" nightmare ?

I’d worry a lot more about bad actors than about governments. Governments have geopolitical motives that have a negligible impact on individuals. Companies and individuals usually have a motive of profit, reputation, or simply anger, and all of those are far more destructive to people.

With this whole privacy discussion, I think it’s helpful to consider it a game of incentives, and to see data as just another form of currency. People want information because it has value, and people want privacy because the disclosure of certain kinds of info means forgoing the value that it contains. Just as we regulate and provide safeguards for how money and goods/services are moved around and changed, I think that we need to consider information as just another tangible good, even if it’s technically an intangible 'thing'. Because when you think about how we regulate money:
  • If you own it, it's yours, you have total control of it, and nobody can claim it as their own.
  • If you exchange it, you have a right to know exactly what you're getting for it, and what the other party is getting in return.
  • If you give it, you forfeit all right to own and/or control it.
...that's all basically how we should probably regulate data, too.
 

EdZ

Virtual Realist
May 11, 2015
With the vast number of methods for tracking users between sites (cookies, Flash, JavaScript, 'hidden' images, many different browser fingerprinting methods, IP logging, etc.) and the ubiquity of their usage, the only way to 'privately' browse the web is to use a regularly reset 'burner' VM image (ideally multiple images with randomised setups), run NoScript stringently, and tunnel all traffic through Tor. Anything less is more security theatre than an effective solution for actually avoiding tracking. This has been the case for quite some time.
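To make the fingerprinting point concrete, here's a minimal sketch of the idea in Python. The attribute names and values are illustrative only; real fingerprinting scripts combine dozens more signals (canvas rendering, installed fonts, audio stack quirks, and so on).

```python
import hashlib

def fingerprint(user_agent: str, screen: str, timezone: str,
                language: str, plugins: str) -> str:
    """Hash a few attributes any page can read without cookies
    into a short, stable browser identifier."""
    raw = "|".join([user_agent, screen, timezone, language, plugins])
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()[:16]

# The same browser yields the same ID on every site that runs the
# script, so it can be followed across sites with no cookie at all.
fp = fingerprint(
    user_agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...",
    screen="1920x1080x24",
    timezone="Europe/Brussels",
    language="nl-BE",
    plugins="Flash 18.0;Java 8u51",
)
print(fp)
```

This is also why a burner setup needs randomised configurations: clearing cookies alone leaves every one of these attributes unchanged.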
Against a nation-state level attacker you'd need to go even further, utilising a physical connection that is mobile and changes IP, MAC, IMSI (for mobile devices), etc. regularly. Tricks like the recently revealed satellite downlink disguise hack might help there, as would changing the physical device rather than just fiddling with the MAC (due to quirks of adapters, operating systems, etc. in handshake timings). But in the case of nation-states, you're more worried about a concerted targeted attack than about blanket capture, and targeted attacks generally lean on physical surveillance methods, which are vastly more effective than electronic ones (why go to the trouble of trying to crack Tor in real time when you could have a guy sit behind you and video your screen with a telephoto lens?).
Blanket surveillance is an unconscionable abuse, but worse, it's utterly ineffective at actually capturing anything useful against the opponents it's supposedly employed against.
 

Phuncz

Lord of the Boards
Original poster
SFFn Staff
May 9, 2015
Excellent discussion here, very insightful.

I do take some small solace, however, in knowing that many implementations of big data harvesting are flawed or are of limited value - either due to time, limited information or space. For example, my Google profile has me pegged as a computer scientist due to the websites (like this one) that I frequent, rather than my actual occupation.
I've noticed this too, and it raises the question of whether this data isn't easily influenced. Instead of blocking tracking systems, why not feed them wrong data? This would ruin their accuracy to the point that it can't be trusted. I may sound like an anarchist at this point, but I'm still very much against this level of automated, unlimited and uncontrolled tracking.
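This noise-feeding idea exists in the wild (the TrackMeNot browser extension, for instance, issues randomised decoy searches). A toy sketch of the approach in Python; the topic list is invented and the "decoy search" is only printed, not actually sent anywhere.

```python
import random

# Decoy topics to drown a tracker's profile in noise. Invented list.
DECOY_TOPICS = [
    "knitting patterns", "diesel engines", "opera tickets",
    "tax law", "bird watching", "beekeeping", "model trains",
]

def decoy_batch(n: int, rng: random.Random) -> list:
    """Pick n random decoy topics to interleave with real traffic."""
    return [rng.choice(DECOY_TOPICS) for _ in range(n)]

rng = random.Random()  # unseeded: unpredictable decoys in real use
for topic in decoy_batch(3, rng):
    # A real implementation would fire these as background searches;
    # here we only print them.
    print("decoy search:", topic)
```

Whether this actually degrades a profiler's model depends on how much real signal it already has; it raises the tracker's cost rather than guaranteeing privacy.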

Facebook, in my mind, is much worse in terms of their lack of transparency.
Oh yes, I very much agree on this. I'm not on any social network (that I know of), and it bothers my friends that I'm not on site X or app Y. But I'm too pragmatic to be neutral in my reasoning, and I'm very aware of the (in my eyes) abusive and invasive privacy-shredding financial models most social networks have. Especially because most are a fad: two years ago it was a shame I wasn't on Facebook, this year it's sad I'm not on WhatsApp, and in 2016 I guess it'll be Instagram. I don't like social media replacing real social contact; it's sad to see that many people who thought I was a nerd five years ago are now all dependent on their smartphone, tablet or laptop. But I'm still the outlier, because I don't mindlessly follow the herd.

Facebook, on the other hand, has already demonstrated that they have no issue with performing social experimentation and engineering in order to support their business.
I've been trying to find an example of this being done on such a large scale before social media, but I'm coming up empty. It's indeed worrying.

Governments will always want more information and more access. From their point of view, there’s no real downside in having it, since the cost of trying to make it useful is easily covered by national budgets.
...
Governments have geopolitical motives that have a negligible impact on individuals. Companies and individuals usually have a motive of profit, reputation, or simply anger, and all of those are far more destructive to people.
While I see your point on the first part, a government is still made up of a bunch of people, easily influenced. In my own and neighbouring countries, I still see a lot of shortsighted decisions, woefully mismanaged multi-billion euro/dollar projects and obvious links between the government and companies. It's very hard to see the line where a political agenda stops and a company agenda starts (in my area).

The data collected by these devices for insurance companies isn't particularly valuable to anyone other than the driver and the insurer itself.
I agree with you, but would we ever be sure this doesn't track routes and stops? From that, one could correlate when a person or family goes to and comes from work, who they visit on weekends, which politically or commercially interesting locations they frequent, or whether they engage in legal but sensitive activities such as affairs or secret hobbies. You can see how this could make people specific targets of (hard) selling and, at the furthest extent, extortion of government officials or popular figures. Because at the top, who is keeping an eye on honesty? Look how long it took for one man (Snowden) to come clean in a country that's full of special and secretive divisions. In a tiny country like Belgium this is a very real scenario.

Now look at the positives. Without these trackers, car insurers have no idea how aggressive someone drives on a regular basis - they can't really tell if someone speeds consistently, brakes abruptly, takes turns far too quickly, and so forth. Yet, insurers know that these behaviors correlate very strongly with more accidents. Consequently, they're forced to not account for driving behaviors in the rates they charge, even though they matter a lot - meaning that safer drivers are effectively subsiding the rates of aggressive ones, since they don't appear any different to the insurer.

So, through a simple and fair mechanism, you can now encourage safer drivers to keep it up, encourage riskier drivers to improve, and provide more competitive services for the price. Meanwhile, traffic accidents and fatalities go down, since everyone has newly realized, strong, and active incentives to drive in a safer manner.
That last part sounds like the marketing-speak that was used to persuade the insurance brokers into buying those GPS trackers from that company ;)

Continuing the off-topic: road warriors were last century's problem; not focusing on the road is all the rage now. Causes: smartphones, stress from home or work, completely saturated roads. But there's not much a GPS tracker can do to solve that, so it's just another way for the insurance broker to weasel out of paying.
Oh, and it's common for people who don't consider themselves part of the Belgian community to have no insurance at all. You see a dirty, lowered VW Golf with a massive exhaust and erratic behavior? You stay the eff away. I'm not worried about a speed demon in a new BMW M3 with sticky tires and carbon-ceramic brakes. I'm worried about a YOLO-tard in a rusty BMW 316 with completely worn tires and failing brakes.

So to me, this entire GPS tracking system is either a waste of money (meaning more expensive insurance or bankruptcy) or has another agenda than "improve road safety and insurance prices". Either way, the consumer loses, and we still aren't sure our privacy isn't being sold to the highest bidder.

With this whole privacy discussion, I think it’s helpful to consider it a game of incentives, and to see data as just another form of currency. People want information because it has value, and people want privacy because the disclosure of certain kinds of info means forgoing the value that it contains. Just as we regulate and provide safeguards for how money and goods/services are moved around and changed, I think that we need to consider information as just another tangible good, even if it’s technically an intangible 'thing'. Because when you think about how we regulate money:
  • If you own it, it's yours, you have total control of it, and nobody can claim it as their own.
  • If you exchange it, you have a right to know exactly what you're getting for it, and what the other party is getting in return.
  • If you give it, you forfeit all right to own and/or control it.
...that's all basically how we should probably regulate data, too.
While I refuse to pay with my privacy as a currency, I am open to direct or indirect payment with money. I've been advocating a shared subscription model, much like Google Contributor. While I have a problem with paying various sites 1€ or 1$ a month each to surf ad-free and, specifically, tracking-free, I would be open to paying 10€ or 10$ a month covering almost every site I visit.

Your proposition of treating data as a currency does sound interesting, but with many people ignorantly forfeiting their data for free services, this can hardly happen. Just look at how freemium games are ruining (or have ruined) mobile apps and games.
Ah yes, you want to track my health? For free?! OMG! Or are you going to sell that data to my health insurer?
That last consideration is still not common among my fellow citizens.
 

Phuncz

Lord of the Boards
Original poster
SFFn Staff
May 9, 2015
I don't mind ads, as long as they don't track people or use ad-space that is sold in real-time online auctions (a point of entry for malware). And I'd much rather pay than see ads :p
 

PlayfulPhoenix

Founder of SFF.N
SFFLAB
Chimera Industries
Gold Supporter
Feb 22, 2015
While I refuse to pay with my privacy as a currency, I am open for direct or indirect payment with moneys. I've been advocating a shared subscription model, much like Google Contributor. While I have a problem with paying various sites 1€ or 1$ a month to surf ad-free and specifically tracking-free, I would be open to paying 10€ or 10$ a month for almost every site I visit.

To be clear, what I meant was that I simply think people should think about their privacy (or their personal data) more like anything else. I feel that people take a lot of the privacy they have for granted, and that such individuals oftentimes yield a lot of their personal data to companies because they simply don't realize what they're handing over. But people never really have this problem with actual money - nobody defaults to being carefree about how they handle money (well, OK, almost nobody), and nobody buys something without a reasonable understanding of how much it will cost them.

My point is less about modality and more about mentality. Paying to use a website with dollars, and paying by being presented with ads, are exchanges that rely on different currencies. But you're still 'paying' in either scenario. People just don't seem to see it that way.

Your proposition for treating data as a currency does sound interesting, but with many people ignorantly forfeiting their data for free services, this can hardly happen. Just look at how freemium games is ruining (or has ruined) mobile apps and games.
Ah yes, you want to track my health ? For free ?! OMG ! Or are you going to sell that data with my health insurance ?
That last part is still not common to consider amongst my fellow citizens.

People can't reasonably expect to understand how all of their personal information is used because (again reverting back to economic rationalities) there is a dynamic of asymmetric information in these sorts of transactions. Developers and companies and their lawyers have a profound understanding of what information they get, and how valuable it is, because they create the agreements that users agree to, and they only have to think about their product. But this dynamic puts a massive burden on users to investigate the agreements of all the digital goods and services they use, which is patently ridiculous.

What the industry really needs are universal standards that apps and services can be certified against, which abstract complex considerations of data disclosure into a more accessible sliding scale that represents how potentially invasive something is. Akin to movie ratings or food health & safety ratings, for example.

Bad time to announce ads on SFFn? :p (just joking!)

I don't mind ads, as long as they don't track people or have ad-space that is sold online realtime (point of entry for malware). And I'd much rather pay than see ads :p

We'll avoid ads whenever possible, but if the day comes where our expenses necessitate ads, we will be diligent about choosing an implementation that isn't awful. There are ad networks out there that don't suck.

Contribution models are nice to think about as a matter of theory, but very few people are willing to pay for content that's expected to be free. They also don't tend to scale well - I struggle to think of a single website that operates by providing all content publicly, and then funds itself exclusively through voluntary donations.
 

Phuncz

Lord of the Boards
Original poster
SFFn Staff
May 9, 2015
People can't reasonably expect to understand how all of their personal information is used because (again reverting back to economic rationalities) there is a dynamic of asymmetric information in these sorts of transactions. Developers and companies and their lawyers have a profound understanding of what information they get, and how valuable it is, because they create the agreements that users agree to, and they only have to think about their product. But this dynamic puts a massive burden on users to investigate the agreements of all the digital goods and services they use, which is patently ridiculous.

What the industry really needs are universal standards that apps and services can be certified against, which abstract complex considerations of data disclosure into a more accessible sliding scale that represents how potentially invasive something is. Akin to movie ratings or food health & safety ratings, for example.
Indeed, good insight and an interesting point of view. I feel we consumers are being severely abused through the unintended ignorance of the masses. But will it last? Will we become aware of the consequences, or will we suffer up to the point where we realise corporations have found other ways to monetize?

It's what I think about most on this topic: do we care too much about our privacy? Or, if we some day realise 'privacy' is a concept of the past, will it be because every commercial entity is so focused on data mining that the data becomes redundant and unsellable, and it stops being financially viable? I can't believe data-mining is sustainable in the long run, but then again, maybe I used to think paying for website access was an unbelievable concept too.

The biggest problem I face is peer pressure: friends asking and begging me to join Facebook, WhatsApp, Instagram, Snapchat, Twitter, whatever. While I stand by my principles, refusing to sell out to passing fads or be part of the fake "social parades", it still bothers me that people take issue with me having these principles. Maybe I'm not YOLO enough.

I struggle to think of a single website that operates by providing all content publicly, and then funds itself exclusively through voluntary donations.
Wikipedia, private torrent trackers, hobbyist websites. While maybe not 100% voluntary, they do lean very heavily on donations.