Announcing the Finalists of the Prosocial Ranking Challenge
Injecting credible news, bridging-based ranking, downranking toxicity, adding diversity, and more
We received 21 first round entries, and from those our judges have selected eight finalists!
Thank you to all who submitted. To those who were not selected: we know it was a lot of work, and even though we are not going forward with your idea, your participation is what makes this kind of crowdsourced science possible.
The Finalists
These eight teams now have a month to make their algorithms production-ready, in preparation for our experiment starting on June 1. In alphabetical order by team leader:
1. Muhammed Haroon, Anshuman Chhabra, Magdalena E Wojcieszak
Our algorithm boosts news posts from credible and ideologically diverse news sources, tailored to a specific user's interests, to increase political knowledge and make users more resilient to threats to democracy.
2. Jana Lasser
The algorithm surfaces informative and friendly content while suppressing toxic and polarizing content or untrustworthy news items. If no high quality content is available, it displays a warning, discouraging users from scrolling further.
3. Killian McLoughlin, Steve Rathje, Laura Globig, Sarah Mughal, Jay Van Bavel
Our algorithm downranks divisive social media content: posts with expressions of moral outrage or negative sentiment, a low ‘likes per retweet’ ratio, and links to low-quality news sites. Posts linking to the lowest-quality sites are replaced.
4. Alex Moehring
This credibility-based ranking algorithm will downrank content that contains links to news articles from low-credibility sources, in order to understand how low-credibility news content affects engagement on social media. We rely on established measures of publisher credibility that capture an aggregate summary of publishers' journalistic quality, as compiled in Lin et al. (2023).
5. Emil Schmitz, Aaron Snoswell, William He, Jean Burgess, Damiano Spina
We (i) re-rank posts to highlight diverse political opinions, and (ii) inject public posts with high bridging-ness. The diversification is driven by LLMs simulating politically diverse personas, and bridging potential is gauged using historical engagement metrics.
6. Martin Saveski, Luke Thorburn, Soham De, Kayla Duskin, Hongfan Lu, Aviv Ovadya, Smitha Milli
The algorithm detects civic content that doesn’t attract diverse engagement (not ‘bridging’) and replaces it with similar content that is expected to be more bridging, hopefully reducing divisiveness without decreasing civic content.
7. Siddarth Srinivasan, Manuel Wüthrich, Max Spohn, Sara Fish, Peter Mason, Safwan Hossain
Our algorithm recommends content based on ideological alignment of source and user, i.e., content that users likely disagree with from sources they usually agree with (and vice-versa), to study the impact on users’ perceptions of out-groups and inclination to consider other perspectives.
8. Bao Tran Truong, Saumya Bhadani, Ozgur Can Seckin, Do Won Kim
Our algorithm aims to engage a wide range of political perspectives while fostering civil discussions. It demotes posts that might attract toxic/offensive responses and prioritizes those that elicit positive emotions, while resonating with an ideologically diverse audience.
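Many of the entries above share a common shape: compute one or more prosocial signals per post (toxicity, source credibility, bridging potential), combine them into a score, and re-sort the feed. As a rough illustration only, here is a minimal sketch of that pattern; the `Post` fields, weights, and scoring formula are all hypothetical stand-ins, not any team's actual ranker, and in practice the signals would come from external classifiers and source-credibility lists.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    toxicity: float     # 0..1, e.g. from a toxicity classifier (assumed available)
    credibility: float  # 0..1, e.g. from a source-credibility list (assumed available)
    engagement: float   # platform's baseline engagement prediction

def prosocial_score(post: Post, w_tox: float = 1.0, w_cred: float = 0.5) -> float:
    """Penalize toxicity and reward credible sources on top of baseline engagement.

    The linear combination and the weights are illustrative choices only.
    """
    return post.engagement - w_tox * post.toxicity + w_cred * post.credibility

def rerank(feed: list[Post]) -> list[Post]:
    """Return the feed re-sorted by descending prosocial score."""
    return sorted(feed, key=prosocial_score, reverse=True)
```

A downranking approach like this reorders rather than removes posts, which is one reason several teams pair it with replacement or injection of higher-quality content when nothing suitable remains in the feed.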
What’s next?
Things will happen quite quickly now. Each team must submit a production-ready ranker, meeting our architecture and performance requirements, by June 1. All those who meet this deadline will share in the $50k prize pool; however, we can only fund five for testing. There will therefore be another round of judging to select the winners, to be announced on June 15, the same day the experiment proper starts running.