The next time you log into Facebook, refresh your Twitter feed or search Google News, have a closer look at what appears: chances are you will see content from the same few people or publishers day in, day out. That’s because online algorithms determine what we see, trapping us in an echo chamber of like-minded opinion and news.

A lot has been written about this issue, and the debate was reignited recently with the launch of Google’s personalised news feed, which presents articles based on a user’s search history and topics of interest. But less thought has been given to who’s really responsible for this information bubble we find ourselves in. Do our own views influence the algorithms… or are the algorithms influencing our views?

Facebook CEO Mark Zuckerberg says that “people are actually exposed to more diverse content on Facebook and social media than on traditional media like newspapers and TV.” To a point, this is true. And in an era when we are time-poor but content-rich, social media’s algorithms can help us cope with the vast amount of information that bombards us on a daily basis.

But there is a side effect: with platforms controlling the flow of information, we are exposed only to ideas we have previously indicated we agree with. It is simple to unfollow people on Twitter or Facebook when we see something we don’t like, making it easier and easier to ignore views different to our own. This filtered reality not only influences our opinions but, as we saw in the U.S. presidential election and the Brexit vote, can also shape the outcome of major political events.

When the Internet was launched, it was expected to be an open, democratic source of information. Yet with bots spewing out vast numbers of posts, the sheer volume of content on social media feeds is making it harder for people and platforms to distinguish fact from fiction. Some say we should welcome algorithms that filter out content for which we have neither the appetite nor the time. But given the proliferation and influence of fake news, others argue that social platforms should offer their users more variety, or at least the option to opt out of their algorithms.

So whose fault is it really?

There’s no simple answer. Social networks certainly bear a responsibility to help users break out of the filter bubble. Educating users through initiatives like Facebook’s Journalism Project goes some way towards combating the problem. Of course, this does rely on users taking an active interest in the quality of the information they are consuming.

Facebook, which has admitted it could have done more in the past to combat fake news, has said it will block individuals and organisations that share fake news from advertising on the platform altogether. Google has taken similar steps by launching a ‘Fact Check’ tag, which identifies articles whose claims have been checked for accuracy by news publishers and fact-checking organisations. But platforms should also examine their algorithms to better understand people’s news consumption habits, and adapt those algorithms accordingly, which will require substantial investment in resources and infrastructure.

Users also bear some responsibility. Heightened awareness of the issue of fake news has made us all more sceptical about what is served to us online. One way we can curate more diverse newsfeeds is by expanding the breadth of individuals and organisations we follow to include ones we may actively disagree with, or by engaging with content that presents a viewpoint different from our own. This will ‘trick’ the algorithms into thinking this is content we may like or agree with, thereby expanding the range of content that appears in our feeds.

It is also up to users to go beyond the headlines and look for other sources that confirm the story they have read. Because ultimately, if we cannot take the time to curate our own news feeds or we refuse to challenge ourselves and look for competing opinions, then why should we expect social platforms to do it for us?

Emma Paterson is a Digital Associate in Finsbury’s London office.