The only fully automatable solution we're aware of is the one that [USER=297]@Ceros_X[/USER] has mentioned, which essentially post-processes each post whenever it is made or updated. It would be the lowest-effort, highest-efficacy solution, but at the cost of rigidity - it only works one way, by replacing all links with referral links of a specific type.
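
To make that concrete, here's a minimal sketch of the kind of rewrite pass such an add-on might run on every post save (Python purely for illustration; the host list and the sffforum-20 tag are hypothetical placeholders, not our actual configuration):

[CODE=python]
import re
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

# Hypothetical forum-wide referral tag; a real add-on would read this
# from its configuration rather than hard-coding it.
FORUM_TAG = "sffforum-20"

# Hosts the pass knows how to rewrite; anything else is left untouched.
KNOWN_HOSTS = {"amazon.com", "www.amazon.com", "smile.amazon.com"}

URL_PATTERN = re.compile(r"https?://[^\s\"'<>\]]+")

def rewrite_link(url: str) -> str:
    """Force the forum's referral tag onto any recognized product link."""
    parts = urlparse(url)
    if parts.netloc.lower() not in KNOWN_HOSTS:
        return url  # not a host we recognize; passes through unchanged
    query = parse_qs(parts.query)
    query["tag"] = [FORUM_TAG]  # overwrite whatever tag was there
    return urlunparse(parts._replace(query=urlencode(query, doseq=True)))

def on_post_save(body: str) -> str:
    """Hook run whenever a post is made or updated."""
    return URL_PATTERN.sub(lambda m: rewrite_link(m.group(0)), body)
[/CODE]

The rigidity is visible right in the sketch: only hosts on the known list get handled, and they all get handled the same way.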

It also isn't perfect in its own right - it can't audit all referral links for all websites, and it may be defeatable with some redirect or URL-shortening trickery. We're not sure about that, though, because we have yet to test it.
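
On the shortener/redirect question, one thing we could test is resolving a link to its final destination before judging it, roughly like this (again just a sketch: the requests calls are standard, but the parameter list is a guess on our part, and some sites refuse HEAD requests, so a real check would need a GET fallback):

[CODE=python]
import requests
from urllib.parse import urlparse, parse_qs

# Query parameters commonly used for affiliate/referral codes.
# This set is an assumption, not an authoritative list.
REFERRAL_PARAMS = {"tag", "ref", "aff", "affid", "affiliate"}

def resolve_final_url(url: str) -> str:
    """Follow redirects (e.g., through a URL shortener) to the real target."""
    resp = requests.head(url, allow_redirects=True, timeout=5)
    return resp.url

def looks_like_referral(url: str) -> bool:
    """Heuristic: does the resolved URL carry a known referral parameter?"""
    query = parse_qs(urlparse(resolve_final_url(url)).query)
    return any(param in query for param in REFERRAL_PARAMS)
[/CODE]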

Comparatively, all other solutions would be much more malleable - we could adapt them to whatever we think the 'ideal' policy is - but that flexibility comes at the cost of some degree of manual review and intervention. For each instance of a link being reported, the workflow would look something like this:

  1. Someone would have to notice that a referral link is a referral. (Can you always tell? Do most users know what that does or doesn't look like? What if a shortened URL or redirect is used? Can we reliably depend on users to police each other, and do we want to? What about posts that are updated long after they were posted?)
  2. A moderator would have to be notified.
  3. A moderator would have to investigate and determine if the use of the referral code follows the established rules. (If the rule hinges on relevancy or user history, what concrete guidelines can we provide to users? If it hinges on supporter status, how do we audit posts given that users can change their supporter status over time? One possible answer to the latter is sketched just after this list.)
  4. Action would have to be taken, recorded, and reported. (What would be the appropriate action? How would we track it? How would we escalate rule-violators that repeat themselves? What about users that submit erroneous reports?)
  5. Particularly if the action was disciplinary, there may be remedial work to do afterwards. (Discussion with the user, explaining the rule violation, etc.; alternatively, discussion with the reporter on whether or not the report itself was appropriate.)
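
On the supporter-status audit question in step 3, one option would be to snapshot the author's status at posting time, so a later review doesn't depend on what their status happens to be today. A rough sketch of the record we'd need (hypothetical fields, not anything we've built):

[CODE=python]
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class PostSnapshot:
    """Facts captured at post-save time for any later audit."""
    post_id: int
    author_id: int
    was_supporter: bool  # status at the moment of posting, not today
    saved_at: datetime

def snapshot_post(post_id: int, author_id: int, is_supporter: bool) -> PostSnapshot:
    # Written alongside the post itself, so the audit trail survives
    # any later change to the author's supporter status.
    return PostSnapshot(post_id, author_id, is_supporter,
                        datetime.now(timezone.utc))
[/CODE]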

The frequency of this workflow would scale roughly with user growth, though bear in mind that we'd have to go through all of it for false alarms as well as for legitimate rule violations. I think there is some concern on our end that 10x the user base might lead to greater than 10x the incidence rate, since as online communities grow, the share of rogue/misbehaving users tends to rise, not just their absolute number.
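
To put rough numbers on that worry (every input here is made up, purely to show the shape of the scaling):

[CODE=python]
# Back-of-envelope: if the misbehaving share of users creeps up as the
# community grows, moderation load grows faster than the user base.
# All of these figures are assumptions for illustration only.
users_now, users_later = 1_000, 10_000
bad_share_now, bad_share_later = 0.010, 0.015  # assumed upward creep
reports_per_bad_user = 2      # reports generated per such user per month
minutes_per_report = 15       # triage through follow-up, per the list above

for users, share in [(users_now, bad_share_now), (users_later, bad_share_later)]:
    reports = users * share * reports_per_bad_user
    hours = reports * minutes_per_report / 60
    print(f"{users:>6} users -> {reports:5.0f} reports/mo, {hours:5.1f} mod-hours/mo")
[/CODE]

Under those made-up inputs, 10x the users produces 15x the moderation hours - which is exactly the nonlinearity we're worried about.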

I'd like to emphasize this sentiment if only because it's an important one to account for. There's likely a difference between the 'ideal' policy we could use in a perfect world, and the policy that is actually best for SFF Forum given the realities of the community, online communities more generally, and the resources/capabilities we have to enforce the policies we create. That aspect is really what makes this conversation and policy decision so difficult.