Analysis Example: Lots of Data, Not Enough Questions
Looking back at this piece, I could definitely have used my time to polish my argument, which I doubt would be clear to everyone coming into this analysis. Perhaps I would have delved more into the details of the more recent publications from the Meta project mentioned below, and added another counterexample to show how large amounts of data often miss the individual (and why that matters in my eyes).
Lots of Data, Not Enough Questions
Rotem Landesman
I remember when the news came out about the joint research Meta conducted with various universities around the country, and the excitement people expressed over the sheer magnitude of data used in the study, and the benevolence Meta seemed to grant researchers surrounding this data. The studies were generally framed as “describing the role of social media in American democracy” - and focused primarily on how the algorithms that determine what appears in people’s feeds affect what they see and believe. Four initial peer-reviewed publications came out of the studies conducted during the 2020 election cycle, each concerning a question chosen by the researchers with the “explicit agreement that the only reasons Meta could reject such designs would be for legal, privacy, or logistical (i.e., infeasibility) reasons”.
The issue with these studies is not, in my mind, the data they collected, which as far as the documentation goes seems to have been collected with consent from platform users, or even the fact that their results are synthesized into dense academic papers few are likely to read and a few clickbait articles many are likely to misinterpret. The real problem, in my eyes, lies in the questions we were asking of the data itself. As we think about data and the science we ought to conduct in an age where algorithmic abilities allow us to examine human phenomena quantified and at scale, it seems we’re letting years of post-positivist, hard-truth-and-numbers thinking guide us and our lines of questioning. In simpler terms, I don’t think asking whether algorithms affect the way we experience social media and what we believe is the right question for our day and age.
Now please don’t get me wrong - yes, of course we should ask whether these social media platforms change the way we view the world, so that we can begin to foster a more inclusive and empathetic society, one that champions pluralistic debate and civil discourse in a joint effort to improve quality of life around the world. But digging into the research, after having asked the question of whether algorithms affect what we see, seems… moot? The results show - and don’t be too shocked - that algorithms have an echo chamber effect. In essence, “the segregation in our information diets starts with who we follow”, and what we see in our feeds “amplifies the ideological leanings of our social networks”. Let me be the first to say: duh. We’re social animals, attuned to gathering like-minded individuals around us to boost our egos and make us feel pretty and smart and amazing - why would we change any of that when we’re online?
The more interesting questions lie in changing that norm, acknowledging that we live in an incredibly interconnected world and that our human instincts must evolve alongside it. If social media companies are boasting about being the tool of interconnectivity, how about using their immense stores of data to gain some insight into how they can make us better as humans? That, in my eyes, would be a more noble question to ask of the data, shaping the results into action items that would further our society rather than shed light on what we already intuitively know and Meta kindly allows us to back up with data.
Independent researchers are already onto this task, finding that dialogue around fact-checking changes when users are paired with others who hold different views, and that the handling of misinformation online differs between political camps. I would love to see the insights from these research projects combined with Meta’s data capabilities to answer questions like: how do we design for better online dialogue between users, to create a fertile environment for political debate and a safe space for people to change their minds? How can we make sure that while we present ourselves and live a significant portion of our lives on these platforms, they try their best not to harm our mental well-being, or perhaps even improve our mental constitution? Even a simple spin-off study using the finding that feeds organized chronologically instead of “personally” make users spend less time scrolling - which was thrown into one of the research papers and not given much thought beyond a mention - would have been an interesting path to follow, to see how we can spend less time on screens and more time with each other. But that last one might be a reach for a company like Meta to ask. It would have been fascinating to know individuals’ experiences on these platforms during election season, and whether the personal, experiential perspective could amend these results toward a more contextual view of the world around us. After all, the limitation of data science is that it is just that: data. When we research people, who are usually best represented beyond the numbers associated with them, there is a facet of the story - or rather a slew of confusing, statistically uncomfortable data - we agree to lose by focusing on the numerical, just as this highly regarded research project chose to do.
At the end of the day, science is really a story we choose to tell ourselves about the world, understood through a lens we choose at the given moment of inquiry. The data we collect, no matter how big or small, is only as good as the questions we ask of it and are willing to hear the answers to. In this case of Meta’s research into the doings of its algorithms during the 2020 election season, I believe the moment was right to ask harder-hitting questions than “does the algorithm change our worldview?”. But, on an optimistic end note, perhaps it is a start.
Sources:
https://about.fb.com/news/2023/07/research-social-media-impact-elections
https://medium.com/@2020_election_research_project/first-four-papers-from-us-2020-facebook-instagram-research-election-study-published-in-science-c099c235fc6c
https://www.wired.com/story/meta-social-media-polarization
https://www.science.org/doi/10.1126/science.abp9364
https://theconversation.com/people-dig-deeper-to-fact-check-social-media-posts-when-paired-with-someone-who-doesnt-share-their-perspective-new-research-216881
https://theconversation.com/its-not-just-about-facts-democrats-and-republicans-have-sharply-different-attitudes-about-removing-misinformation-from-social-media-216809