PR-wise, social media has had a rough few years. It was somewhat naively hailed as an unambiguous force for good in the wake of the Arab Spring, but people are now waking up to its dangers. We've already covered the inconvenient truth that our brains may not have evolved enough to cope with it, and the awkward realisation that fake news and trolling could be a feature rather than a bug. Still, it's hard not to have some sympathy for the companies struggling with the scale of a sociological experiment that's unprecedented in human history.
Every day, over 65 years' worth of video footage is uploaded to YouTube. Over 350 million photos are posted on Facebook. "Hundreds of millions" of tweets are sent, the majority of which are ignored.
There was one we knew was a terrorist; he was on the most wanted list. If you followed him on Twitter, Twitter would recommend other terrorists
Clint Watts, FBI
All of these statistics are at least a year out of date (the companies have broadly come to the collective conclusion that transparency isn't an asset), so it's almost certain that the real numbers are now much higher. But even with these lower figures, employing the number of humans required to moderate all this content effectively would be impossible, so artificial intelligence does the heavy lifting. And that can spell trouble.
If you're sceptical about the amount of work AI now does for social media, this anecdote from former FBI agent Clint Watts should give you pause for thought. Watts and his team were tracking terrorists on Twitter. "There was one we knew was a terrorist; he was on the most wanted list," Watts explained during a panel discussion at Mozilla's Mozfest. "If you followed him on Twitter, Twitter would recommend other terrorists."
When Watts and his team highlighted the number of terrorists on the platform to Twitter, the company was evasive. "They'd be, 'you don't know that,'" Watts said. "Actually, your algorithm told me they're on your platform; that's how we figured it out. They know the location, and behind the scenes they know you're communicating with people who look like you and sound like you."
At its heart, this is the problem with all recommendation algorithms for social media: because most of us don't use social media like the FBI does, it's a fairly safe bet that you follow things because you like them, and if you like them it follows that you would also enjoy things that are similar.
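To make that mechanism concrete, here's a minimal sketch of the "people who follow X also follow Y" logic a recommender might use. The follow graph and the recommend function are invented purely for illustration; they don't reflect Twitter's actual system, which is far more sophisticated and entirely private.

[code]
from collections import Counter

# Toy follow graph (entirely invented): user -> accounts they follow.
follows = {
    "alice": {"news_daily", "runner_joe", "ultra_runs"},
    "bob":   {"runner_joe", "ultra_runs", "trail_tips"},
    "carol": {"news_daily", "runner_joe"},
}

def recommend(user, follows, top_n=3):
    """Suggest accounts followed by users whose follows overlap with yours."""
    mine = follows[user]
    scores = Counter()
    for other, theirs in follows.items():
        if other == user:
            continue
        overlap = len(mine & theirs)       # how similar are our tastes?
        for account in theirs - mine:      # accounts they follow that I don't
            scores[account] += overlap     # weighted by that similarity
    return [account for account, _ in scores.most_common(top_n)]

print(recommend("carol", follows))  # ['ultra_runs', 'trail_tips']
[/code]

The logic has no idea what the cluster it is reinforcing actually contains; follow one account in a cluster and you'll be nudged towards the rest, which is exactly what Watts saw with the account on the most wanted list.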
Tracking the wrong metrics
This reaches its unfortunate end state with YouTube: a company that measures success largely on the number of videos consumed and the time spent watching. It doesn't really matter what you're absorbing, just that you are.
YouTube's algorithms exploit this mercilessly, and there are coal-mine canaries raising the alarm. Guillaume Chaslot is a former YouTube software engineer who founded AlgoTransparency: a bot that follows 1,000 channels on YouTube every day to see how its choices affect the site's recommended content. It's an imperfect solution, but in the absence of actual transparency from Google, it does a pretty good job of shining a light on how the company is influencing young minds. And it's not always pretty.
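AlgoTransparency's exact pipeline isn't described in detail here, but the core idea (crawl a fixed list of channels each day, record which videos are recommended from each, and rank them by how many channels surface them) can be sketched roughly as follows. The fetch_recommendations helper is hypothetical, standing in for whatever data collection the real bot does.

[code]
import datetime
import json
from collections import Counter

def fetch_recommendations(channel_id):
    """Hypothetical helper: return the video IDs recommended alongside this
    channel's recent uploads. It stands in for the real bot's scraping, which
    isn't shown here, purely to illustrate the counting logic."""
    raise NotImplementedError("replace with real scraping or API calls")

def daily_snapshot(channel_ids):
    """Count how many tracked channels lead to each recommended video today."""
    recommended_from = Counter()
    for channel in channel_ids:
        # Use a set so a video recommended several times from one channel
        # still only counts that channel once.
        for video_id in set(fetch_recommendations(channel)):
            recommended_from[video_id] += 1
    return {
        "date": datetime.date.today().isoformat(),
        # Ranked by how many channels push the video, not by view count,
        # which is how a clip with only a few hundred views can top the list.
        "top_recommended": recommended_from.most_common(20),
    }

if __name__ == "__main__":
    with open("tracked_channels.json") as f:
        channels = json.load(f)  # the ~1,000 channels the bot follows
    print(json.dumps(daily_snapshot(channels), indent=2))
[/code]

Run daily and stored, snapshots like this make it possible to say what YouTube was pushing hardest on any given date, which is exactly the kind of claim Chaslot goes on to make.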
"The day before the [Pittsburgh] synagogue attack, the video that was most recommended was a David Icke video about George Soros controlling the world's money, shared to 40 channels, despite having only 800 views," Chaslot told an audience on the Mozfest AI panel.
We checked later, and he's right: here's the day on AlgoTransparency, although clicking through now shows that it's been watched over 75,000 times. While it would be a pretty big leap to associate a synagogue attack with YouTube pushing a conspiracy theory about a prominent Jewish billionaire (especially a video that appears to have, comparatively speaking, bombed at the time), it's not a good look for Google.
[IMG alt="eQANrXZQpWKiGNXp5SYQQP" width="690px" height="388px"]https://cdn.mos.cms.futurecdn.net/eQANrXZQpWKiGNXp5SYQQP.jpg[/IMG]
AlgoTransparency is a bot that attempts to unpick YouTube's recommendation algorithm
"It makes sense from the algorithmic point of view, but from the society point of view, to have like an algorithm deciding what's important or not? It doesn't make any sense," Chaslot told us in an interview after the panel. Indeed, the algorithm is hugely successful in terms of growth, but as others have reported, it has a tendency to push people to the extremes, as this New York Times experiment demonstrates.
It seems as if you are never 'hard core' enough for YouTube's recommendation algorithm. Videos about vegetarianism led to videos about veganism. Videos about jogging led to videos about running ultramarathons
Zeynep Tufekci
"It seems as if you are never 'hard core' enough for YouTube's recommendation algorithm," wrote the author Zeynep Tufekci in the piece. "Videos about vegetarianism led to videos about veganism. Videos about jogging led to videos about running ultramarathons."
Some of us have the willpower to walk away, but an algorithm trained on billions of people has become pretty good at keeping others on the hook for one last video. "For me, YouTube tries to push plane landing videos because they have a history of me watching plane landing videos," says Chaslot. "I don't want to watch plane landing videos, but when I see one I can't restrain myself from clicking on it," he laughs.
Provoking division
Exploiting human attention isn't just good for lining the pockets of social media giants and the YouTube stars who seem to have stumbled upon the secret formula of viral success. It's also proved a handy tool for terrorists spreading propaganda and nation states looking to sow discord throughout the world. The Russian political adverts exposed in the wake of the Cambridge Analytica scandal were curiously non-partisan in nature, seeking to stir conflict between groups rather than clearly siding with one party or another.
And just as YouTube's algorithm has found that divisive extremes get results, so have nation states. "It's one part human, one part tech," Watts told TechRadar after the panel discussion was over. "You have to understand the humans in order to be duping them, you know, if you're trying to influence them with disinformation or misinformation."
You have to understand the humans in order to be duping them, you know, if you're trying to influence them with disinformation or misinformation.
Clint Watts, FBI
Russia has been particularly big on this: its infamous St Petersburg "troll factory" grew from 25 to over 1,000 employees in two years. Does Watts think that nation states have been surprised at just how effective social media has been at pushing political goals?
"I mean, Russia was best at it," he says. "They've always understood that sort of information warfare, and they used it on their own populations. I think it was more successful than they even anticipated.
"Look, it plays to authoritarians, and it's used either to suppress in repressive regimes or to mess with liberal democracies. So, yeah, I mean, cost to benefit, it's the next extension of cyberwarfare."
Exploiting the algorithms
Although the algorithms that decide whether posts, tweets and videos sink or swim are kept completely under wraps (Chaslot says that even his fellow YouTube programmers couldn't explain why one video might be exploding), nation states have the time and resources to figure them out in a way that regular users just don't.
"Big state actors, the usual suspects, they know how the algorithms work, so they're able to impact it much better than individual YouTubers or people who watch YouTube," Chaslot says. For that reason, he would like to see YouTube make its algorithm a lot clearer: after all, if nation states are already gaming it effectively, then what's the harm in giving regular users a fairer roll of the dice?
A lot of alt-right conspiracy theories get extremely amplified by the algorithm, but they still complain about being censored, so reality doesn't matter to them
Guillaume Chaslot, AlgoTransparency
It's not just YouTube, either. Russian and Iranian troublemakers have proved effective at gaming Facebook's algorithms, according to Chaslot, particularly by taking advantage of its preference for pushing posts from smaller groups. "You had an artificial intelligence that says, 'Hey, when you have a small group you're very likely to be interested in what it posts.' So they created these hundreds of thousands of very tiny groups that grew really fast."
Why have social media companies been reluctant to tackle their algorithmic issues? Firstly, as anybody who has worked for a website will tell you, problems are prioritised according to size, and in pure numbers these are small fry. As Chaslot explains, if, say, 1% of users are radicalised by extreme content, or made to believe conspiracy theories, well, it's just 1%. That's a position it's very easy to empathise with, until you remember that 1% of two billion is 20 million.
[IMG alt="5826f749f5866f01c5c428364e67626c" width="690px" height="388px"]https://cdn.mos.cms.futurecdn.net/5826f749f5866f01c5c428364e67626c.jpg[/IMG]
Censorship and oppression can be powerful tools in the hands of propagandists
But more than that, how can you measure mental impact? Video watch time is easy to quantify, but how can you tell whether a video is influencing somebody for the worse until they act upon it? And even then, how can you prove that it was that video, that post, that tweet that pushed them over the edge? "When I talk to some of the Googlers, they were like 'some people are having fun watching flat Earth conspiracy theories, they find them hilarious', and that's true," says Chaslot. "But some of them are also in Nigeria, where Boko Haram uses a flat Earth conspiracy to go and shoot geography teachers."
Aside from that, there's also the question of how far social media companies should intervene. One of the most powerful weapons in the propagandist's arsenal is the claim of being censored, and heavy-handed intervention would play directly into their hands.
"We see alt-right conspiracy theorists saying that they are being decreased on YouTube, which is absolutely not true," says Chaslot. "You can see it on AlgoTransparency: a lot of alt-right conspiracy theories get extremely amplified by the algorithm, but they still complain about being censored, so reality doesn't matter to them."
They can change their terms of service all they want, [but] the manipulators are always going to dance inside whatever the changes are
Clint Watts, FBI
Despite this, the narrative of censorship and oppression has even been picked up by the President of the United States. So how can companies rein in their algorithms in a way that isn't seen as disguising a hidden agenda?
"They're in a tough spot," concedes Watts. "They can't really screen news without being seen as biased, and their terms of service are really only focused around violence or threats of violence. A lot of this is like mobilising to violence, maybe, but it's not specifically like 'go attack this person'. They can change their terms of service all they want, [but] the manipulators are always going to dance inside whatever the changes are."
This last point is important: social networks are constantly amending their terms of service to catch new issues as they arise, but inevitably they can't catch everything. "You can't flag a video because it's untrue," says Chaslot. "I mean, they had to make a specific rule in the terms of service saying 'you can't harass survivors of mass shootings'. It doesn't make sense. You have to make rules for everything and then take down things."
Can we fix it?
Despite this, Watts believes that social media companies are beginning to take these problems seriously. "I think Facebook's moved a long way in a very short time," he says, although he believes companies may be reaching the limits of what can be done unilaterally.
"They'll hit a point where they can't do much more unless you have governments and intelligence services cooperating with the social media companies, saying 'we know this account is not who they say they are'. You're having a little bit of that in the US, but it'll have to grow, just like we did against terrorism. This is exactly what we did against terrorism."
From the regulators' perspective, they don't understand tech as well as they understand donuts and tobacco
Clint Watts, FBI
Watts doesn't exactly seem optimistic about regulators' ability to get on top of the problem, though. "From the regulators' perspective, they don't understand tech as well as they understand donuts and tobacco," he says. "We saw that when Mark Zuckerberg testified to the Senate of the United States. There were very few that really understood how to ask him questions.
"They really don't know what to do to not kill the industry. And certain parties want the industry killed so they can move their audiences to apps, so they can use artificial intelligence to better control the minds of their supporters."
Former FBI agent Clint Watts says the US Senate's questioning of Mark Zuckerberg showed how little regulators understand about technology
Not that this is all on government: far from it. "What was Facebook's thing? 'Move fast and break things'? And they did; they broke the most important thing: trust. If you move so fast that you break trust, you don't have an industry. Any industry you see take off like a rocket, I'm always waiting to see it come down like a rocket too."
There is one positive to take from all this, though: the current tech and governmental elite are being replaced by younger generations who seem more aware of the internet's pitfalls. As Watts says, young people are better at spotting fake information than their parents, and they give privacy a far higher priority than those of us taken in by the early social movers and shakers.
"Anecdotally, I mostly talk to old people in the US and I give them briefings," says Watts. "Their immediate reaction is 'we've got to tell our kids about this.' I say: 'No, no. Your kids have to tell you about this.'"
[ul]
[li]How to delete your Facebook account[/li][/ul]