
I guess relatively few people still turn to piracy; it seems like all these streaming platforms are massively increasing their prices, knowing that their customers will try to rationalise the extra spending somehow.



If you have slower internet or don’t seed 24/7, I would recommend just focusing on seeding torrents with a low number of seeders. It doesn’t really matter if you leech the latest episode of a popular new TV series, as there will be so many other seeders (many with a better capacity to seed). However, for something older or more niche, your decision to seed or leech could determine whether someone else gets to enjoy that content.
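If you want to act on this systematically rather than eyeballing swarm sizes, here is a minimal sketch in Python, assuming a local qBittorrent instance with the Web UI enabled and the third-party qbittorrent-api package installed; the host, port and credentials are placeholders. It just lists your completed torrents with the rarest swarms first, i.e. the ones most worth keeping seeded:

```python
import qbittorrentapi

# Placeholder connection details for a local qBittorrent Web UI.
client = qbittorrentapi.Client(
    host="localhost",
    port=8080,
    username="admin",
    password="adminadmin",
)
client.auth_log_in()

# num_complete is the number of seeds in the whole swarm, so sorting on it
# ascending puts the torrents that depend most on you at the top.
for torrent in sorted(
    client.torrents_info(status_filter="completed"),
    key=lambda t: t.num_complete,
):
    print(f"{torrent.num_complete:5d} seeds in swarm | {torrent.name}")
```

Anything near the top of that list is a good candidate to keep seeding long-term; anything with hundreds of seeders can be dropped without anyone noticing.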


The first game is much creepier than the second, I think due to a combination of the character designs, the writing and the general plot. The second game feels more akin to Danganronpa, in that the characters and setting are a bit surreal. Because it was a 3DS game, it also uses cartoony 3D models that make everything a bit lighter and less gritty than the original game. I haven’t played the third one yet (still need to get around to 100% completing the second game).


I found 999: Nine Hours, Nine Persons, Nine Doors to be very unsettling. I played it in bed at night with headphones on and it totally sucked me in. I guess this is a different type of horror to many of the games suggested here, which I personally don’t find scary.



Yes, people have had these existential crisis moments about piracy for many years. A few notable examples within my lifetime were the many issues of The Pirate Bay in the 2000s, the closure of KickassTorrents in the 2010s and RARBG’s shutdown last year. People panicked over the initial DeezLoader and YouTube Vanced project shutdowns too. Every single time, without fail, something new rises up, whether it’s a direct clone or something entirely new. It’s not always as good initially, but I can’t really say any of these “crackdowns” have had a significant effect from my perspective.


Yes, the projects on GitHub were shut down. That doesn’t mean the extension suddenly became “ineffective” or is no longer being worked on. It was simply cloned elsewhere and development continues. I think I am getting an idea of why you’re so pessimistic about piracy: you base your opinions on articles and forum speculation instead of lived experience. If you actually used this extension, you would know that at no point has it stopped working.


Almost all the ways to bypass news paywalls are currently ineffective.

What are you referring to here? Bypass Paywalls Clean is still being updated and covers a lot of stuff across multiple languages.





It’s funny how similar these shutdown messages always are. They never state the real reason why the project is ending and always have some weird sentence at the end recommending everyone use paid services, even though their entire project completely undermined them for years. I guess they are advised (threatened) to use certain language by whoever is pressuring them.


One of my university lecturers uses this; I could see it in his bookmarks while he was casting his browser lol


You act like this is a universally confusing concept, when it’s only Americans who seem to have difficulty understanding that different countries have different laws and definitions. In any case, it was reported as solitary confinement in both the EU and US at the time so I’m not really sure what you guys are crying about.


That is sort of like complaining that people think of the US when they hear “school shooting”:

No it’s not, because in this case it was quite clearly solitary confinement in Sweden and Denmark. If you read that and thought “oh they mean US solitary confinement” then you are retarded.


Ok so I think what most people think about when they talk about solitary confinement is the US version

“Okay so I think what most people think about when they talk about Sweden and Denmark is the US”.


He was held in solitary confinement in both Sweden and Denmark. This was reported on at the time. I’m not sure why you’re trying to second-guess me when you clearly have zero knowledge about the history of this guy.


I believe it was because he failed to return to Sweden to serve his Pirate Bay sentence and instead remained in Cambodia, where he was living at the time. There was an international warrant out for his arrest, and when he was deported back to Sweden he was judged to be at risk of flight or further “criminal activities”. He was removed from solitary after a few months, so I’m not sure if he was put back there for his later, longer hacking sentence.

EDIT: He was later held in solitary confinement in Denmark for at least 10 months while awaiting trial for hacking.


Nice bravado but he ultimately wasted years of his life in solitary confinement.

EDIT: Maybe not years. Certainly months. Actually it was over a year when you add the reported stints together.


I didn’t say you did. Your question implies that there must be a reason why the listing of resources in the megathread is considered “legal”. I am flipping the question around and asking you - why must this be the case? Do you believe there is something inherently “illegal” about the megathread? If so, what?



It still works for now and the project will be revived in some way, I’m sure. Nothing to worry about.


The author just insists that Israeli government genocide is bad and that the ordinary citizens are complicit. I think the implicit logic must be: bad people should be punished, depriving them of music punishes them. While it might satisfy a craving to hurt the bad guys, I think it’s much harder to claim that this would help stop the genocide.

It’s because the mindset of the people who make these types of arguments is rooted in childish ideas about human behaviour. It’s why younger people in particular are so big on cancel culture, because they still believe that taking away the toys magically changes the behaviour of adults out in the real world. What actually happens when you cut people off completely is that you lose access to all the outlets through which you can begin or maintain a dialogue. Who do these people turn to when you’re no longer talking to them? The culture warriors never get this far because they lack the life experience to understand how to navigate difficult relationships. Social media has unfortunately contributed significantly to the spread of this infantile mindset where “the world is full of good people and bad people, and if we disagree about something then you’re clearly one of the bad people and I am no longer talking to you”.


This is a moronic take, the kind of thing only some western Gen Z cancel culture warrior with no life experience would believe in. Radiohead understand that large numbers of their fans live in countries with questionable or outright authoritarian governments (they are massive in South America, for example). It would be very problematic if they started picking and choosing which of their fans was deserving of a live concert based on where they live or what kind of policies their government has been pursuing. Their music is something that unites people from all over the world and continuing to share it with everyone is the best thing they can do in this situation.


Yeah, it’s pretty bad; I had to scroll for a while to come across the statistics I was interested in. Between 15% and 20% of Australians unlawfully stream content, while for downloading it’s up around the 65% mark. Unlawful live streaming of sport is around 25%, as is video game piracy. Music streaming has lower levels of unlawful access, at only 12%. However, music downloading is again much higher, around 62%. Most lawful content is delivered via streaming these days, so it makes sense to me that the numbers are lower for streaming and higher for downloading. These are all self-reported figures, of course, so it’s likely that they somewhat underestimate the scale of unlawful behaviours.


It is pretty common in Australian media; piracy is quite widespread here, so I don’t think there’s really a social taboo around it. Public figures usually reference it in a joking way to protect themselves, but we all know what they’re really saying and doing.


It’s pretty easy to use. The only challenge for newcomers is setting up port forwarding, since some users won’t share their collection with people who have their ports blocked. You don’t have to open your ports or share your music collection, but that is leeching and is considered a dick move by some.
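For what it’s worth, here is a minimal sketch of a local sanity check, with a placeholder port number. It only confirms that your client is actually listening on the port you configured, not that your router is forwarding it; the router side still needs an external check such as a port-checking website:

```python
import socket

LISTEN_PORT = 2234  # placeholder; use the port set in your client's options

# connect_ex returns 0 if something accepted the TCP connection locally.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.settimeout(2)
    listening = sock.connect_ex(("127.0.0.1", LISTEN_PORT)) == 0

print("client is listening locally" if listening
      else "nothing listening on that port")
```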


This is just an argument for ceding space to conservatives, which makes them seem more prevalent than they are, because they’ve driven the opposition away.

The irony of a Beehaw user trying to make this argument in a Beehaw thread…

Whether social media is essential to life or not, it’s a normal part of modern life, and telling people to avoid it is no different than telling people to avoid bars or clubs if they don’t want to be harassed. It’s just victim blaming.

The correct comparison would be that it is like returning to the same bar, on the same day, at the same time when you know the people who have harassed you previously will be there. It is not victim blaming to suggest avoiding that particular bar if attending it is causing the person to have a mental breakdown. Giving choice, power and control back to the victim is not the same as blaming them for their situation. Again, we are having this conversation in a Beehaw thread; if you don’t understand the significance of that then I don’t know what else to say.


I would encourage you to read OP’s post again and ask yourself why the only top level reply in the thread might seem to be addressing an idea rather than a person.


Have you tried the various Firefox forks? If one of your primary problems with Firefox is a belief that they are “evil like Google” then switching to a browser developed by Google and further entrenching their monopoly on the market is a very strange decision.


Is it actually viable for ordinary people to purchase small amounts of cryptocurrency for the sole purpose of making more private purchases online or taking advantage of cryptocurrency discounts? All the coverage on them is about large-scale investing, which makes me feel like no one buying and selling actually has any interest in cryptocurrencies as an alternative currency. Instead it’s just about getting rich, which is a massive turn-off.


And now, you’re changing the subject.

I don’t know why you keep saying this. Do you still think I’m someone else you replied to earlier?


They weren’t “going off on a rant”, though. They were responding to something you said and it was a pretty good reply too with a lot of detail and thought put into it. If you didn’t want to discuss the election, you shouldn’t have quoted a sentence about the election. Getting all snide and dismissive after the fact is very strange behaviour on your part.



I see a lot of people reacting negatively to minorities and leftists breaking down on social media

The thing about that though is right wingers will push and push and push. They will spend all day every day harassing someone until they finally break down and have an outburst.

If you don’t spend all day on social media, you can’t be harassed on social media all day. If an online space is so toxic that you are “breaking down”, then you have a responsibility to yourself to reassess whether it is actually healthy for you to be spending time in that space. Don’t get hung up on some kindergarten ideas about “fairness” or “they started it”; take a step back and realise that you actually have agency and choice. It is very strange to me that people complain about how toxic social media is and then change absolutely nothing about their own behaviour. Social media is non-essential to life; you do not need to be using it, particularly not if it is causing you severe mental stress.


Why are you going off on some rant about how to win the election? What you’re saying is correct about how to approach the election and that’s not the subject of this post or this conversation.

You directly quoted and replied to a sentence referencing Trump’s presidential campaign. It is perfectly reasonable for people to assume you are interested in discussing the upcoming presidential election.


I have that problem too but I find using a Chromium-based browser is the solution. I doubt you actually need to use Chrome for these websites you’re having problems with.


Yes, Trump’s campaign is contributing to the effectiveness of the “weird” attack by very obviously allowing themselves to be triggered by it. Running counter-ads where they try to co-opt “weird” and use it to describe the Democrats is a massive fail on their part.





In spring 2018, Mark Zuckerberg invited more than a dozen professors and academics to a series of dinners at his home to discuss how Facebook could better keep its platforms safe from election disinformation, violent content, child sexual abuse material, and hate speech. Alongside these secret meetings, Facebook was regularly making pronouncements that it was spending hundreds of millions of dollars and hiring thousands of human content moderators to make its platforms safer. After Facebook was widely blamed for the rise of “fake news” that supposedly helped Trump win the 2016 election, Facebook repeatedly brought in reporters to examine its election “war room” and explained what it was doing to police its platform, which famously included a new “Oversight Board,” a sort of Supreme Court for hard Facebook decisions.

At the time, Joseph and I published a deep dive into how Facebook does content moderation, an astoundingly difficult task considering the scale of Facebook’s userbase, the differing countries and legal regimes it operates under, and the dizzying array of borderline cases it would need to make policies for and litigate against. As part of that article, I went to Facebook’s Menlo Park headquarters and had a series of on-the-record interviews with policymakers and executives about how important content moderation is and how seriously the company takes it. In 2018, Zuckerberg published a manifesto stating that “the most important thing we at Facebook can do is develop the social infrastructure to build a global community,” and that one of the most important aspects of this would be to “build a safe community that prevents harm [and] helps during crisis” and to build an “informed community” and an “inclusive community.”

Several years later, Facebook has been overrun by AI-generated spam and outright scams. Many of the “people” engaging with this content are bots who themselves spam the platform. Porn and nonconsensual imagery are easy to find on Facebook and Instagram. We have reported endlessly on the proliferation of paid advertisements for drugs, stolen credit cards, hacked accounts, and ads for electricians and roofers who appear to be soliciting potential customers with sex work. Its own verified influencers have their bodies regularly stolen by “AI influencers” in the service of promoting OnlyFans pages also full of stolen content.

Meta still regularly publishes updates that explain what it is doing to keep its platforms safe. In April, it launched “new tools to help protect against extortion and intimate image abuse” and in February it explained how it was “helping teens avoid sextortion scams” and that it would begin “labeling AI-generated images on Facebook, Instagram, and Threads,” though the overwhelming majority of AI-generated images on the platform are still not labeled. Meta also still publishes a “Community Standards Enforcement Report,” where it explains things like “in August 2023 alone, we disabled more than 500,000 accounts for violating our child sexual exploitation policies.”

There are still people working on content moderation at Meta. But experts I spoke to who once had great insight into how Facebook makes its decisions say that they no longer know what is happening at the platform, and I’ve repeatedly found entire communities dedicated to posting porn, grotesque AI, spam, and scams operating openly on the platform.

Meta now at best inconsistently responds to our questions about these problems, and has declined repeated requests for on-the-record interviews for this and other investigations. Several of the professors who used to consult directly or indirectly with the company say they have not engaged with Meta in years. Some of the people I spoke to said that they are unsure whether their previous contacts still work at the company or, if they do, what they are doing there. Others have switched their academic focus after years of feeling ignored or harassed by right-wing activists who have accused them of being people who just want to censor the internet.

Meanwhile, several groups that have done very important research on content moderation are falling apart or being actively targeted by critics. Last week, Platformer reported that the Stanford Internet Observatory, which runs the Journal of Online Trust & Safety, is “being dismantled” and that several key researchers, including Renee DiResta, who did critical work on Facebook’s AI spam problem, have left. In a statement, the Stanford Internet Observatory said “Stanford has not shut down or dismantled SIO as a result of outside pressure. SIO does, however, face funding challenges as its founding grants will soon be exhausted.” (Stanford has an endowment of $36 billion.) Following her departure, DiResta wrote for The Atlantic that conspiracy theorists regularly claim she is a CIA shill and one of the leaders of a “Censorship Industrial Complex.” Media Matters is being sued by Elon Musk for pointing out that ads for major brands were appearing next to antisemitic and pro-Nazi content on Twitter, and recently had to do mass layoffs.

“You go from having dinner at Zuckerberg’s house to them being like, yeah, we don’t need you anymore,” Danielle Citron, a professor at the University of Virginia’s School of Law who previously consulted with Facebook on trust and safety issues, told me. “So yeah, it’s disheartening.”

It is not a good time to be in the content moderation industry. Republicans and the right wing of American politics more broadly see this as a deserved reckoning for liberal-leaning, California-based social media companies that have taken away their free speech. Elon Musk bought an entire social media platform in part to dismantle its content moderation team and its rules. And yet, what we are seeing on Facebook is not a free speech heaven. It is a zombified platform full of bots, scammers, malware, bloated features, horrific AI-generated images, abandoned accounts, and dead people that has become a laughing stock on other platforms. Meta has fucked around with Facebook, and now it is finding out.

“I believe we're in a time of experimentation where platforms are willing to gamble and roll the dice and say, ‘How little content moderation can we get away with?’” Sarah T. Roberts, a UCLA professor and author of Behind the Screen: Content Moderation in the Shadows of Social Media, told me.

In November, Elon Musk sat on stage with a New York Times reporter and was asked about the Media Matters report that caused several major companies to pull advertising from X: “I hope they stop. Don’t advertise,” Musk said. “If somebody is going to try to blackmail me with advertising, blackmail me with money, go fuck yourself. Go fuck yourself. Is that clear? I hope it is.”

There was a brief moment last year where many large companies pulled advertising from X, ostensibly because they did not want their brands associated with antisemitic or white nationalist content and did not want to be associated with Musk, who has not only allowed this type of content but has often espoused it himself. But X has told employees that 65 percent of advertisers have returned to the platform, and the death of X has thus far been greatly exaggerated. Musk spent much of last week doing damage control, and X’s revenue is down significantly, according to Bloomberg. But the comments did not fully tank the platform, and Musk continues to float it with his enormous wealth.

This was an important moment not just for X, but for other social media companies, too. In order for Meta’s platforms to be seen as a safer alternative for advertisers, Zuckerberg had to meet the extremely low bar of “not overtly platforming Nazis” and “didn’t tell advertisers to ‘go fuck yourself.’” UCLA’s Roberts has always argued that content moderation is about keeping platforms that make almost all of their money on advertising “brand safe” for those advertisers, not about keeping their users “safe” or censoring content. Musk’s apology tour has highlighted Roberts’s point that content moderation is for advertisers, not users.

“After he said ‘Go fuck yourself,’ Meta can just kind of sit back and let the ball roll downhill toward Musk,” Roberts said. “And any backlash there has been to those brands or to X has been very fleeting. Companies keep coming back and are advertising on all of these sites, so there have been no consequences.”

Meta’s content moderation workforce, which it once talked endlessly about, is now rarely discussed publicly by the company (Accenture was at one point making $500 million a year from its Meta content moderation contract). Meta did not answer a series of detailed questions for this piece, including ones about its relationship with academia, its philosophical approach to content moderation, what it thinks of AI spam and scams, and whether there has been a shift in its overall content moderation strategy. It also declined a request to make anyone on its trust and safety teams available for an on-the-record interview. It did say, however, that it has many more human content moderators today than it did in 2018. “The truth is we have only invested more in the content moderation and trust and safety spaces,” a Meta spokesperson said. “We have around 40,000 people globally working on safety and security today, compared to 20,000 in 2018.”

Roberts said content moderation is expensive, and that, after years of speaking about the topic openly, perhaps Meta now believes it is better to operate primarily under the radar. “Content moderation, from the perspective of the C-suite, is considered to be a cost center, and they see no financial upside in providing that service. They’re not compelled by the obvious and true argument that, over the long term, having a hospitable platform is going to engender users who come on and stay for a longer period of time in aggregate,” Roberts said. “And so I think [Meta] has reverted to secrecy around these matters because it suits them to be able to do whatever they want, including ramping back up if there’s a need, or, you know, abdicating their responsibilities by diminishing the teams they may have once had. The whole point of having offshore, third-party contractors is they can spin these teams up and spin them down pretty much with a phone call.” Roberts added, “I personally haven’t heard from Facebook in probably four years.”

Citron, who worked directly with Facebook on nonconsensual imagery being shared on the platform and on a system that automatically flags nonconsensual intimate imagery and CSAM based on a hash database of abusive images, which was adopted by Facebook and then YouTube, said that what happened to Facebook is “definitely devastating.”

“There was a period where they understood the issue, and it was very rewarding to see the hash database adopted, like, ‘We have this possible technological way to address a very serious social problem,’” she said. “And now I have not worked with Facebook in any meaningful way since 2018. We’ve seen the dismantling of content moderation teams [not just at Meta] but at Twitch, too. I worked with Twitch and then I didn’t work with Twitch. My people got fired in April.”

“There was a period of time where companies were quite concerned that their content moderation decisions would have consequences. But those consequences have not materialized. X shows that the PR loss leading to advertisers fleeing is temporary,” Citron added. “It’s an experiment. It’s like ‘What happens when you don’t have content moderation?’ If the answer is, ‘You have a little bit of a backlash, but it’s temporary and it all comes back,’ well, you know what the answer is? You don’t have to do anything. 100 percent.”

I told everyone I spoke to that, anecdotally, it felt to me like Facebook has become a disastrous, zombified cesspool. All of the researchers I spoke to said that this is not just a vibe. “It’s not anecdotal, it’s a fact,” Citron said. In November, she published a paper in the Yale Law Journal about women who have faced gendered abuse and sexual harassment in Meta’s Horizon Worlds virtual reality platform, which found that the company is ignoring user reports and expects the targets of this abuse to simply use a “personal boundary” feature to ignore it. The paper notes that “Meta is following the nonrecognition playbook in refusing to address sexual harassment on its VR platforms in a meaningful manner.”

“The response from leadership was like ‘Well, we can’t do anything,’” Citron said. “But having worked with them since 2010, it’s like ‘You know you can do something!’ The idea that they think that this is a hard problem given that people are actually reporting this to them, it’s gobsmacking to me.”

Another researcher I spoke to, who I am not naming because they have been subjected to harassment for their work, said, “I also have very little visibility into what’s happening at Facebook around content moderation these days. I’m honestly not sure who does have that visibility at the moment. And perhaps both of these are at least partially explained by the political backlash against moderation and researchers in this space.” Another researcher said, “It’s a shitshow seeing what’s happening to Facebook. I don’t know if my contacts on the moderation teams are even still there at this point.” A third said Facebook did not respond to their emails anymore.

Not all of this can be explained by Elon Musk or by direct political backlash from the right. The existence of Section 230 of the Communications Decency Act means that social media platforms have wide latitude to do nothing. And, perhaps more importantly, two state-level lawsuits alleging social media censorship have made their way to the Supreme Court, which means that Meta and other social media platforms may be calculating that they could be putting themselves at more risk if they do content moderation. The Supreme Court’s decision on these cases is expected later this week.

The reason I have been so interested in what is happening on Facebook right now is not because I am particularly offended by the content I see there. It’s because Facebook’s present—a dying, decaying colossus taken over by AI content and more or less left to rot by its owner—feels like the future, or the inevitable outcome, of other social platforms and of an AI-dominated internet. I have been likening zombie Facebook to a dead mall. There are people there, but they don’t know why, and most of what’s being shown to them is scammy or weird.

“It’s important to note that Facebook is Meta now, but the metaverse play has really fizzled. They don’t know what the future is, but they do know that ‘Facebook’ is absolutely not the future,” Roberts said. “So there’s a level of disinvestment in Facebook because they don’t know what the next thing exactly is going to be, but they know it’s not going to be this. So you might liken it to the deindustrialization of a manufacturing city that loses its base. There’s not a lot of financial gain to be had in propping up Facebook with new stuff, but it’s not like it disappears or its footprint shrinks. It just gets filled with crypto scams, phishing, hacking, romance scams.”

“And then poor content moderation begets scammers begets this useless crap content, AI-generated stuff, uncanny valley stuff that people don’t enjoy, and it just gets worse and worse,” Roberts said. “So more of that will proliferate in lieu of anything that you actually want to spend time on.”