So, news today that Facebook has recanted its original view that Fake News did not influence the US election - CNN
"The bottom line is: we take misinformation seriously," wrote Zuckerberg. "We take this responsibility seriously. We've made significant progress, but there is more work to be done."
The CEO said that Facebook (FB, Tech30) is working to develop stronger fake news detection, a warning system, easier reporting and technical ways to classify misinformation. Facebook has also been in contact with fact checking organizations.
For Zuckerberg, it's a sharp reversal in tone from comments made in the immediate aftermath of the election.
"I think the idea that fake news on Facebook -- of which it's a small amount of content -- influenced the election in any way is a pretty crazy idea," he said last week.
So why the sharp reversal of views from a typically very canny guy? Well, there are two options:
One is that Facebook, in a Road to Damascus moment, has decided that it has a public duty to start to manage the Fake News Problem. (OK, OK, just joking - this is Facebook after all). But this effort (even the announcement of it) at a stroke gets rid of all those annoying Do Gooders and their Bad Publicity, and deflects too sharp a focus on the ongoing scalable content management problems it is having.
The second is that it reinforces the impression that Facebook does influence people big time, a necessary story to keep on selling Advertising. The most difficult part of saying that Fake News didn't influence anyone is admitting that Ads on Facebook don't either.
Far better to claim major Fake News Influence, kick a few Alt-Right feeds off (for now), and while the sun of public approval shines make hay with the tacit impression that Ads work, big time.
At one stroke, do well by doing good. Genius....
One wonders what MZ was thinking initially. Oh, what's that? Telling the Truth? Now now, we'll have none of that Fake News here....
Spent some of the weekend reading various analyses of why polling for the US election predicted the wrong candidate. The overall point being made, time and again, was that, in very close races, there is a small difference between the candidates and in this case (and Brexit's case) the margin fell the "wrong" way and the unpredicted side won out.
To which the only real response is something on the lines of "well, they would say that" (aka "bollocks")
Because if, as they are now claiming, everything was so close and within their margins of error, and had been close for a while before the day, then you would have expected one or both of the following effects:
- Quite a few of the polls predicting Trump (or Brexit), not all for Clinton (or Remain)
- Some flip-flopping as sample data went first one way and then the other
Instead there was a relentless "Clinton (Remain) is winning" story and we went into the final days with a Clinton win being predicted with between 70% and 99% probability by every major polling outfit. How the $^% does that square with the claim now being made that "everything was close and within the margin of error"? At the very least - the absolute very least - they should have been giving higher odds of winning to Trump (Leave) then!
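To see why a "close race" and a 99% win probability don't square, here is a toy sketch under a deliberately simple assumption (ours, not any pollster's actual model): treat the final polling lead as normally distributed around the true margin, with sigma equal to the poll's effective error. The implied win probability is then just the chance the true margin is above zero.

```python
# Toy normal-error model (our simplifying assumption, NOT how real
# poll aggregators actually compute their probabilities).
from statistics import NormalDist

def implied_win_probability(lead_pts: float, sigma_pts: float) -> float:
    """P(true margin > 0) if the observed lead is the true margin
    plus normal noise with standard deviation sigma_pts."""
    return 1.0 - NormalDist(mu=lead_pts, sigma=sigma_pts).cdf(0.0)

# A 2-point lead with a 3-point error (i.e. "within the margin of
# error") implies only roughly a 75% chance of winning...
print(implied_win_probability(2, 3))   # ~0.75

# ...whereas a ~98% call implies the lead was well OUTSIDE the error:
print(implied_win_probability(2, 1))   # ~0.98
```

In other words, if the race really was within the margin of error, the near-certain calls for Clinton (Remain) were not consistent with the pollsters' own error bars.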
Contrast that with what we were seeing in our tracking of the memes on social media, which showed huge support for Trump, with Clinton only really closing the gap in the last month or so. (And that's just social media; many more conservative people tend to be in demographics that don't use it that heavily.)
We suspect something else was happening.
There was a rather interesting report after the Brexit poll fiasco, which said that in essence the polling companies saw Remain winning because they wanted to see Remain winning - that there was confirmation bias in a number of ways. Nate Silver said something similar after failing to call Trump in the Republican party candidate elections. At the end of the day all this stuff is open to interpretation, and it seems to us the most plausible explanation is that there was a strong tendency to bias to Clinton in nearly every case where an interpretation option came up. Add to that that everyone else is saying the same thing, and it becomes very hard to remain unbiased. You have to set some ground rules.
Before we started tracking the US Election, we had been tracking Brexit using a system dynamic prediction model (see here), and as a result of the Brexit analysis we set up a number of rules for tracking the US election. Rule (ii) is very apt here:
(ii) Beware Hubris - assume the gap is less than you think, especially if you believe you [ie your preferred option] have the "moral" advantage
Given that most of the people who do this sort of work are highly likely to be in a pro-Clinton (Remain) demographic, it may in fact be more than confirmation bias - it may even be a complete inability to realise that there is another interpretation/option. I note with interest that the head of Ogilvy's PR business in the UK recently suggested that his staff get out of London or risk being out of touch.
(Of course, if you follow the Conspiracy theory route, there is a better one - that Clinton's direct influence pwned the media and polling companies)
(Tracking the US Election memes - Trump is the big blob way ahead on the right)
On Saturday November 5th we decided to "go public" with our prediction that Donald Trump was in pole position to win the US Presidential Election. We were in a small minority; most polls and researchers were calling it for Clinton with odds ranging from the high 70s to near 100%, but we trusted what we were seeing on our Dataswarm analytic engine. The system had worked for Brexit, after all (though we were too uncertain to publicise it then, preferring no egg on face to instant fame). So, suitably caveated, we posted it up. Carpe Diem, and all that.
However, it seemed that the minute we posted it, all the good news for Trump started going bad, which from our self-interested point of view was also bad news. First the FBI investigation into Clinton's emails was cancelled, and we were told her support was rising. Then news came through that she was storming ahead in early voting in a number of key states. If she won we would have egg on our faces, if we retracted the prediction we would too.
(We make no comment about anyone's views about the political outcome, this is all about how the technology worked)
Waking up in time for the 6 am BBC morning news in the UK (5 hours ahead of the US) we heard that Trump had almost won, with a far wider margin than our system had shown was possible. By 8 o'clock most pundits had called it. Trump was the next President-elect.
Our system had got it right - it had worked.
So why had we got it right when nearly all the other polls and pundits had called it wrong? Now that we have had a day or so to look at the outcomes, we think there are four main reasons.
Firstly, Internet vs human polling. Our system looks at verbatim social media data, from Twitter. We had come to the conclusion while monitoring previous UK general elections that people were more willing to share their true thoughts on social media than with pollsters, especially if their views were "non-PC" (in this case, pro-Trump). After the election we read that the LA Times poll, which had consistently been more pro-Trump (and been roundly criticised by nearly every pundit), had been an internet poll, not using people to ask questions, and they believed (and were proved right) that people had been more honest on that. In effect, by monitoring social media we were getting the same sort of uncensored opinions, and in that uncensored world Trump was doing a lot better than the standard polls were predicting. Also, we knew from UK elections about the "shy Tory" effect, where people say one thing in public - typically to look good ("virtue signalling" as it is called) - and do another at the ballot box. (To misquote Phil Ochs, Liberals are 10% left of centre in public, 10% right of centre at the ballot box.)
(Update - there has been a rush of social media monitoring companies saying they saw the same thing as us (here, for example), though they seemed a little more reticent than us about calling it before the event.)
Secondly, the way our system works helped quite a bit. It was initially designed to satisfy a BBC requirement to "Find the Zeitgeist" across its media output, as well as compare it to others' output. To solve this we used a fairly obscure technique we had become interested in called memetic analysis, which groups memes into clusters of fellow travellers (called "memeplexes" in the lingo) rather than looking at things one by one, as Boolean analysis forces one to do. We started it going the day after Trump became Republican Party candidate, and by Nov 8th it had crunched a relevant sample of c. 170m tweets and was tracking 4.5m memes. What our system was showing was that from the get-go, Trump had dominated the memespace (as he had in the primaries too). In meme theory as originally proposed by Richard Dawkins (who coined the term meme - or cultural gene), the view is that memes colonise your mindspace - so in effect the Trump memeplex was hogging the electorate's mindspace, starving out competing memes. Clinton was not anywhere near. To be sure, not all Trump memes were positive, but in essence Trump was using a "Wildean strategy" (the only thing worse than being talked about is not being talked about).
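To give a flavour of the memeplex idea, here is a deliberately simplified sketch (this is NOT Dataswarm's actual algorithm, which is far more sophisticated; the hashtags and thresholds are made up for illustration): memes that co-occur in the same tweets above some threshold get linked, and the connected components of that link graph are treated as memeplexes of fellow-travelling memes.

```python
# Toy memeplex grouping: link memes (e.g. hashtags) that co-occur in
# enough tweets, then take connected components as "memeplexes".
from collections import Counter
from itertools import combinations

def find_memeplexes(tweets, min_cooccur=2):
    """tweets: iterable of sets of memes; returns a list of memeplexes."""
    # Count how often each pair of memes appears in the same tweet.
    pair_counts = Counter()
    for memes in tweets:
        for a, b in combinations(sorted(set(memes)), 2):
            pair_counts[(a, b)] += 1
    # Link pairs that co-occur at least min_cooccur times.
    adj = {}
    for (a, b), n in pair_counts.items():
        if n >= min_cooccur:
            adj.setdefault(a, set()).add(b)
            adj.setdefault(b, set()).add(a)
    # Connected components of the link graph = memeplexes.
    seen, plexes = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, component = [node], set()
        while stack:
            m = stack.pop()
            if m in seen:
                continue
            seen.add(m)
            component.add(m)
            stack.extend(adj[m] - seen)
        plexes.append(component)
    return plexes

tweets = [
    {"#MAGA", "#Trump2016"},
    {"#MAGA", "#Trump2016", "#BuildTheWall"},
    {"#MAGA", "#Trump2016"},
    {"#ImWithHer", "#Clinton"},
    {"#ImWithHer", "#Clinton"},
]
print(find_memeplexes(tweets))  # two memeplexes, one per candidate
```

The point of grouping this way, rather than counting keywords one by one, is that the *size and reach* of a whole memeplex can be tracked over time - which is how a "Trump is hogging the memespace" signal shows up even when individual memes come and go.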
You can see how this works in the Youtube video of the system tracking Trump, above
Thirdly, we knew from Brexit that the "non-voters" were very motivated to come out to vote if someone could credibly promise an "out" from the current political system. Trump seemed to be doing that successfully (one can argue about the morality of his tactics, but the effectiveness of the strategy had been proven for Brexit), and we thought it was happening again.
Lastly, confirmation bias. We knew that after Brexit and the Republican primaries, UK and US pollsters had looked at why they had got it wrong. They had realised that their unwillingness to countenance a Leave/Trump victory had made them look at the data from the point of view of what they wanted to see, not what the data said. Given that the US media and pollsters seemed to be even more pro-Clinton than the UK equivalents were pro-Remain, we suspected there was a Clinton bias in the polling, with Trump supporters going uncounted.
That is why, with Trump just ahead, we thought he would probably win.
For what it's worth, we had thought that if we were wrong it would be because the vote split would go Clinton's way in marginal districts, due to the reputed strength of the Democrat "ground game", beating the above factors. But as with Brexit, the Trump voters proved more motivated and got out and voted.*
(We did note, in our weasily caveats on the 5th, that Clinton could win the Electoral college, but we thought Trump would have the popular vote. Ironically, it turned out the other way).
*Update - it seems Clinton got more of the popular vote than initially reported on the day (though less than most of the polls thought). This is interesting, as we thought she'd get about the same or less. We were still more pessimistic than the pollsters about her chances, and that made the difference in the call, I guess - though as we point out here, even by their own now revealed calculations, the 70%+ chances the pollsters were giving her were not really justified.