It wasn’t too long after Twitter and Facebook moved beyond the early adopter crowd and into the mainstream public that studio executives realized they had an issue on their hands. Twitter in particular caused several fits of furious hand-wringing as even in the early days it attracted more real-time, immediate conversations.
Hollywood’s movers and shakers worried their finely-crafted marketing campaigns were being undone almost immediately by those sharing instantaneous reactions on social media. Opening weekend was no longer a gimme for movies with popular stars and massive TV ad budgets, as a 2009 Los Angeles Times story pointed out.
The phenomenon was often labeled a “Twitter problem” as if the platform itself was the culprit. More accurately this was a word-of-mouth problem, with the platform merely accelerating the spread of those opinions, which otherwise would have waited until Sunday or Monday to be shared. Word of mouth, as many companies across industries were learning, had simply shifted online.
Now, it seems, studios actually have a Twitter problem.
Two recent stories have shown the movie industry is just as vulnerable to manipulation of online conversations as our political system has proven to be.
First, Morten Bay, a researcher at the University of Southern California’s Center for the Digital Future, released a report analyzing online conversations around 2017’s Star Wars: The Last Jedi. Bay found evidence of “deliberate, organized political influence measures disguised as fan arguments.” Those measures were aimed at sowing discord within media and fan communities, both creating and reinforcing the narrative that the movie was incredibly divisive.
That’s incredibly similar to the tactics employed by the same type of groups in the months leading up to the 2016 presidential election, when Russian troll farms amplified key events with a more sinister – and therefore more divisive – angle. The goal was to harden people’s opinions by playing to prejudices and fears.
Second, as reported by BuzzFeed, it seems a group of the more ardent members of Lady Gaga’s fanbase have been actively working to bad-mouth Venom in advance of opening weekend. Their reason for doing so? To suppress the opening weekend box office for that movie in order to help A Star Is Born, in which Gaga stars. They’ve been creating fake accounts and either amplifying or creating bad buzz for Venom to see if they can narrow the gap between the $30 million opening expected for A Star Is Born and the $60 million projected for Venom.
Bots and fake accounts have been a problem Twitter has tried to tackle in various ways and to varying degrees of success in recent years. This past July it announced it would be removing accounts it had previously just locked, causing many people – especially celebrities and other notable figures – to see sometimes dramatic drops in followers as accounts went from simply inactive to non-existent. Facebook engaged in a similar purging of fake accounts, removing 1.3 billion accounts in six months.
A recent Pew study showed the vast majority of links shared on Twitter came from bots, and the problem of those accounts skewing sentiment and engagement data is increasingly being felt by consumer goods companies. The Pew numbers came about a year after a report saying some 15 percent of Twitter accounts were bots, which in 2017 would have put the actual number around 48 million accounts.
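Taking the two figures in that earlier report at face value, the implied size of Twitter’s total account base can be back-calculated. A quick sketch, assuming the 15 percent share and the 48 million bot estimate are both accurate:

```python
# Back-of-the-envelope check of the bot estimate cited above.
# Assumes 15 percent of accounts are bots and that equals ~48 million accounts.
bot_share = 0.15
bot_accounts = 48_000_000

# If 48M is 15% of the platform, the implied total account base is:
implied_total_accounts = bot_accounts / bot_share
print(f"Implied total accounts: {implied_total_accounts:,.0f}")  # 320,000,000
```

That roughly 320 million figure lines up with Twitter’s publicly reported user base around 2017, which is a useful sanity check on the report’s math.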
It’s not as if this is the first time a consumer-facing company has had to deal with these sorts of issues. Amazon and other online retailers have long been flooded with fake reviews and comments, some negative and some positive depending on the intent and desired goal. Despite Amazon banning paid product reviews, a Washington Post analysis from earlier this year indicates it’s still a thriving, pervasive practice.
Earlier this year the issue of fake accounts came into sharp relief around the movie Gotti. The John Travolta-starring mob drama had a 0 percent critical rating on Rotten Tomatoes, with not a single one of the nearly 30 reviews aggregated by the site being positive. But the audience rating on the site was much higher, reaching 80 percent right after release. Analysis of those reviews found many glowing comments were left by people who had just joined the site days prior and who went on to review just one movie: Gotti. That’s certainly suspicious, though the Gotti marketing team denied any involvement in a coordinated campaign.
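The kind of analysis described above — flagging reviewers who joined just before release and reviewed only a single title — amounts to a simple filter over account metadata. A minimal sketch, with hypothetical record fields and thresholds (this is not how Rotten Tomatoes or the analysts actually structured their data):

```python
from datetime import date

# Hypothetical review records: (username, account_created, total_review_count)
reviews = [
    ("film_buff_99", date(2012, 3, 1), 412),
    ("new_fan_1",    date(2018, 6, 12), 1),
    ("new_fan_2",    date(2018, 6, 13), 1),
]

RELEASE_DATE = date(2018, 6, 15)  # Gotti's US release date
MAX_ACCOUNT_AGE_DAYS = 14         # joined within two weeks of release
MAX_REVIEWS = 1                   # reviewed only this one movie

def looks_suspicious(created, review_count):
    """Flag accounts created just before release that reviewed a single title."""
    age_at_release = (RELEASE_DATE - created).days
    return 0 <= age_at_release <= MAX_ACCOUNT_AGE_DAYS and review_count <= MAX_REVIEWS

flagged = [name for name, created, count in reviews
           if looks_suspicious(created, count)]
print(flagged)  # ['new_fan_1', 'new_fan_2']
```

A filter like this only surfaces suspicious patterns; as the Gotti case shows, it can’t by itself prove who coordinated the accounts.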
Setting aside whatever manipulation of online comments and conversations studios and producers may themselves be engaging in, the question remains: How can studios counter the sorts of coordinated attacks being launched against them?
- Understand the extent of the problem. TwitterAudit and a few other services will look for warning signs in an account’s followers, such as skewed following/followers ratios, repeated usage of the same terminology and frequent links to the same sites. If you have the bandwidth you can then report those accounts as spam or bots, but that may not do much. At the very least, if you know 18 percent of your followers are bots you can report on a more realistic number.
- Get proactive with your countermeasures. If you see there’s a clear misinformation campaign being waged against you, fight back. Get out there and engage your *actual* fans, enabling and encouraging them to spread a more positive message. Call out the fact you’re being targeted, providing some evidence so people don’t think you’re just complaining about no one liking the movie you put out.
- Have some fun with it. This one may not be for the faint of heart, but it would be super-fun if some studio released an online ad or TV spot proclaiming a movie to be “The #1 movie targeted by spam accounts on Twitter” in the same way studios crow about being “The #1 comedy in America.” Note that this is different from the defiant, mean attitude taken by the producers of Gotti, which just labeled legitimate critics who didn’t care for the movie as “trolls.”
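The audit heuristics in the first point above — skewed follow ratios, repeated phrasing, repeated links — can be sketched as a simple scoring function. The thresholds and weights here are illustrative assumptions, not TwitterAudit’s actual methodology:

```python
from collections import Counter

def bot_score(following, followers, tweets):
    """Score an account 0-3 on simple bot heuristics; higher = more suspicious.
    Thresholds are illustrative, not any real service's criteria."""
    score = 0
    # 1. Skewed ratio: follows many accounts but is followed by few.
    if followers == 0 or following / followers > 10:
        score += 1
    # 2. Repeated use of the exact same phrasing across most tweets.
    phrase_counts = Counter(tweets)
    if tweets and phrase_counts.most_common(1)[0][1] / len(tweets) > 0.5:
        score += 1
    # 3. Most tweets link to the same domain.
    domains = [t.split("//")[-1].split("/")[0] for t in tweets if "http" in t]
    if tweets and domains and Counter(domains).most_common(1)[0][1] / len(tweets) > 0.5:
        score += 1
    return score

# Example: follows 5,000 users, followed by 40, mostly posts one spammy link.
tweets = ["great deal! http://spam.example/x"] * 8 + ["hello"] * 2
print(bot_score(5000, 40, tweets))  # 3
```

Real detection systems weigh far more signals (posting cadence, account age, network clustering), but even crude scoring like this makes clear how the 18 percent figure in the point above could be estimated.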
Beyond that, studios – like other consumer goods companies – are largely at the mercy of the platforms themselves to keep things tidy. If Twitter doesn’t think an account is a bot, there’s not a whole lot to do. It’s similar to how many women and minorities are frustrated and angry that accounts repeatedly targeting them with misogynistic, hateful messages somehow fail to meet the official standards for abusive behavior.
More broadly, bots and fake accounts aren’t the only problem studios and other companies face when it comes to how word of mouth spreads on social networks and which conversations are surfaced to readers. Facebook and Instagram’s reliance on an algorithmic feed means the system is deciding what to display and what to suppress, so an individual is only receiving part of the picture. If, based on their own habits as well as those of their friends, stories that lean negative are the ones making it through the filter, that’s going to influence their perception of the product.
So it’s not just fake accounts and bots studios have to fight through, it’s the effects of the News Feed algorithm itself as well as other filters like it. That’s a problem studios can’t fix or address directly, but it has a similar level of influence on the social conversation as any coordinated attack, be it from rabid fans rooting for your failure or those who seek to push us further apart culturally.
The very real cybersecurity threats faced by financial institutions every day may indeed mean trouble for the U.S. and global economies, especially if some nation-state actor is willing to shoot themselves in the foot if it means taking down a rival. Just as dangerous to the economy, though, is the kind of malicious spreading of misinformation that can tank stock prices, determine how products fail or succeed and more. It’s that kind of threat that many individual corporations and businesses may not be prepared to fight.