
Algorithmic Ethnography
Investigating digital communities through user experience

What is an algorithm?

The fastest way to answer this question is to Google it. So let's do that.

Our first result comes from Oxford Languages, which gives a simple, computational definition for algorithms. Under this definition, algorithms are often described as recipes or instructions: they tell the computer what to do with inputs, such as numbers to be divided or costs to be summed.

But algorithms are more than just mathematical models. How else do we talk about them? How else does Google talk about them?

Google gives us flowcharts: algorithms define sequences, processes, and, importantly, results. It also gives us books: algorithms are academic, mathematical, and complex.

The suggested videos come from educational sources. Algorithms are tools, tools apparently worth caring about. But we also see a music video: "Algorithm." Something cultural.

And algorithms are making headlines. This week, we get tech headlines: "Deep-Grid MAP-Elites." Last week, we got different headlines.

When I last searched "algorithm," the conversation was focused on...

racial bias and sexism...

higher education...

and free speech.

In the news clip that follows from the House Judiciary Committee's antitrust hearing on July 29th, 2020, a US representative questions Google CEO Sundar Pichai about whether Google search results function "manually" or "algorithmically." What the representative fails to realize is that algorithms are not unbiased by default; in fact, they require a great deal of tweaking to produce "neutral" results. The fallacy of indifferent technology has convinced him that there is no manual intervention, when in fact all algorithms are the product of particular manual interventions.

What was once a computational model, something to be boiled down into flowcharts, has become a source of cultural dialogue spanning music, education, and electioneering. Clearly, algorithms are more than just instructions for numbers; they are recipes for how we experience the Internet and the world. One last example: searching "algorithm" yielded 367,000,000 results in 0.57 seconds. An algorithm determined which results we saw first (the search was done in "Incognito Mode," so these results are not affected by my search history). Based on these Google trends, algorithms will only become more important in our conversations about artificial intelligence, tech-dependency, and the dangers of the Internet.

What is algorithmic ethnography?

As we've seen, much of the internet is driven by algorithms, which, while a computational concept, have taken on a larger cultural meaning. We speak of algorithms as the agents determining our newsfeeds, social media timelines, and advertisements. Algorithms tend to be discussed as if they have a mind of their own, when in reality they are meticulously programmed and manipulated by their creators. Journalistic and anthropological work has been done to investigate the producers of algorithms, but algorithmic ethnography is more interested in how users experience the virtual environments created by algorithms. Algorithmic ethnography is the study of immersive, algorithmically-driven online experiences.

Practically speaking, algorithmic ethnography means spending a lot of time on one's phone or computer. It means watching YouTube videos, Googling research topics on a separate Google account, and trawling through social media on curated profiles. It means engaging with the algorithms that determine online experiences. Algorithmic ethnography does not seek to understand or reverse-engineer the algorithms themselves: this is not the task of the ethnographer. Instead, algorithmic ethnography seeks to simulate the procedural effects of algorithmic interaction, understanding how user-specific digital experiences are created interactionally rather than mathematically. The algorithms may be black boxes whose mechanics are not available to consumers, or the logic of algorithms might be clearly delineated in theory but have unanticipated outcomes in practice. A basic understanding of an app or website's algorithm is sufficient to draw conclusions about how that algorithm is experienced by its users. In the most direct terms: algorithms determine the output of software based on user inputs; algorithmic ethnography is selective about inputs in order to understand the types of output that users will receive. The digital world is not spontaneous or unbiased; instead, it is curated, often by opaque systems, so algorithmic ethnography attempts to understand curators in terms of their effects, not their causes.
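To make this input/output framing concrete, consider the toy sketch below. It is purely illustrative: no real platform works this simply, and the posts, topics, and scoring rule are all invented. A "feed" ranks candidate posts by their overlap with a user's like history, so being selective about which topics one likes visibly reshapes the output.

```r
# Illustrative toy model only: a "feed" that ranks posts by overlap
# with the topics a user has previously liked. Real platform algorithms
# are far more complex and opaque; this sketch just shows why being
# selective about inputs changes the outputs a user receives.
posts <- data.frame(
  id    = 1:6,
  topic = c("politics", "sports", "politics", "music", "sports", "politics")
)

rank_feed <- function(posts, liked_topics) {
  # Score each post by how often its topic appears in the like history
  scores <- sapply(posts$topic, function(t) sum(liked_topics == t))
  posts[order(-scores), ]
}

# Two simulated users with different input histories get different feeds
rank_feed(posts, liked_topics = c("politics", "politics", "music"))
rank_feed(posts, liked_topics = c("sports"))
```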

What differentiates algorithmic ethnography from cyber-ethnography?

In practice, algorithmic ethnography is a form of cyber-ethnography, and it involves hanging out in online spaces and performing all of the regular tasks that a cyber-ethnographer would. However, it differentiates itself by (1) focusing on algorithms as the curators of digital content, (2) across multiple platforms, (3) all embedded within one device. Whereas a cyber-ethnographic field site might be a video game on someone's personal computer, an algorithmic-ethnographic field site might be an array of apps, forums, and websites on someone's research cell phone, in which the entire interface (from Google search suggestions to news notifications) becomes the material for ethnographic research.

Algorithmic Ethnography in Practice

The steps, tips, and warnings about conducting algorithmic ethnography

Step 1: Determine if your research subject is suitable for algorithmic ethnography

Especially in light of the COVID-19 pandemic, many communities have moved online. This makes cyber-ethnography more appealing, but it might not always be apt. Some communities, such as sports fans, might have limited online presences, or online presences that pale in comparison to their real-world activity. Even communities with a stronger digital basis, such as those formed around specific video games, might not have a broad presence across platforms. Listen to your research subjects in order to determine if the digital experience is an important one. Die-hard sports fans might casually browse certain forums or follow certain Twitter accounts, so while cyber-ethnography might be suitable, algorithmic ethnography might not be.

In practice, algorithmic ethnography requires a community with a large, multi-platform online presence.

Step 2: Set up a "clean" device for research only

Algorithms are driven by user input. Using a separate device with separate social media profiles allows researchers to be more selective about their inputs. This is to say, your regular Googling could affect the results of your investigation in unintended ways, so a separate device allows for the effects of particular inputs to be distilled. This also helps to separate research from regular, day-to-day browsing, which, in cyber-ethnography, can be a difficult task when your field site is the Internet.

Step 3: Gather inputs (and input) from your research subjects

It would be unwise for researchers to enter into algorithmic ethnography without consulting their research subjects. Researchers can discuss popular Twitter accounts, social media pages, news outlets, and trends within the community in order to get some things to search, follow, and subscribe to on their device. This combats assumptions that researchers might have about their subjects' interests and online activity.

Step 4: Browse and browse frequently

Algorithms are determined not just by input but by frequency of input and by timing of input. News stories are suggested based on current events, videos are suggested based on trending topics, and social media posts are suggested based on levels of interaction. To not engage frequently is to miss the coherent trends that are emerging and to be subject to only a limited number of posts. Algorithms are time-dependent and interaction-dependent, so frequent use helps to generate more data and contribute to the algorithms' evolution over time. That said, regular field notes should be taken, as with any ethnography.
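Because the algorithms are time-dependent, it helps to timestamp every observation. One minimal, hypothetical approach (the filename and fields here are arbitrary choices, not a standard) is a small R helper that appends each note to a running CSV log:

```r
# A minimal field-note helper (one possible approach, not a standard):
# appends a timestamped observation to a running CSV log, so notes can
# later be matched against time-dependent algorithmic behavior.
log_note <- function(platform, note, file = "fieldnotes.csv") {
  entry <- data.frame(
    time     = format(Sys.time(), "%Y-%m-%d %H:%M:%S"),
    platform = platform,
    note     = note
  )
  new_file <- !file.exists(file)
  write.table(entry, file, sep = ",", row.names = FALSE,
              col.names = new_file, append = !new_file)
}

log_note("Twitter", "Timeline dominated by one hashtag after a big news story")
```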

Step 5: Ask for evidence of your subjects' algorithms

You should reach out to your research subjects when possible to ask them for evidence of what their algorithmic experiences are like, especially if a significant event has occurred in their community. Ask for screenshots of their YouTube recommendations, their 20 latest tweets, or the top 5 news articles that Google suggests to them. Such evidence can give the researcher insight into how algorithms determine their subjects' experiences of current events, how closely aligned their experiences are with the researcher's, and how diverse the algorithmic experience might be for some communities.

Step 6: Archive and archive frequently

The Internet is fleeting. Viral tweets, TikToks, or Facebook posts can disappear or become impossible to find. Luckily, most apps allow for some form of built-in archiving, such as "liking," "bookmarking," or "favoriting," but even these methods depend on the information remaining available online. Researchers can archive their data through a variety of methods, whether that means screenshotting posts, webscraping text from websites, or downloading videos. So much online content is ephemeral that it should be archived regularly.
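As one hedged example of the webscraping route, the sketch below uses the rvest package to save both the raw HTML and a plain-text extraction of a page under timestamped filenames. The URL and CSS selector are placeholders: every site needs its own selector, and many platforms require authenticated or API-based access instead.

```r
# A minimal archiving sketch using the rvest package. The URL and the
# CSS selector are placeholders; every site needs its own selector.
library(rvest)

archive_page <- function(url, selector = "body", dir = "archive") {
  dir.create(dir, showWarnings = FALSE)
  stamp <- format(Sys.time(), "%Y%m%d_%H%M%S")
  page  <- read_html(url)

  # Keep the raw HTML: extracted text alone loses layout and images
  writeLines(as.character(page), file.path(dir, paste0(stamp, ".html")))

  # Also keep a plain-text extraction for quick searching later
  text <- html_text2(html_elements(page, selector))
  writeLines(text, file.path(dir, paste0(stamp, ".txt")))
}

archive_page("https://example.com/some-public-post")
```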

Step 7: Download your digital presence

At the end of your study, take advantage of the archives or account histories that many sites and apps allow users to download. This is exceptionally useful for ethnography: researchers can have a log of every term searched, every tweet liked, and every YouTube video watched. Understanding the final form of one's YouTube feed in light of the totality and trajectory of one's YouTube history can provide insight into how algorithms compound inputs to generate a relationship with their user, drawing from the past to predict or anticipate what users might want.
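For example, Google Takeout can export a YouTube watch history as JSON. The sketch below is one way to start exploring such an export; the filename and the field names ("time") are assumptions, since export formats vary by platform and version, so inspect the file before parsing.

```r
# Sketch for exploring a downloaded archive, here a YouTube watch
# history exported via Google Takeout. The filename and field names
# are assumptions; exports vary by platform and version.
library(jsonlite)

history <- fromJSON("watch-history.json")

# Count videos watched per day to see how engagement ramped up over time
days <- as.Date(history$time)
head(sort(table(days), decreasing = TRUE))
```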

Step 8: Analyze your data

Turn off the phone and dig into the data. Of course, normal ethnographic analysis should be performed, but remember: algorithmic ethnography is (1) a tool for getting involved in communities, (2) a way of experiencing communities, and (3) a way of understanding technology. The analysis must simultaneously seek to explain the community in its own context and explain how the virtual environment mediates or makes the community. There must be a reflection on the algorithms themselves as well as on the results of (or content provided by) the algorithms.

Advantages of Algorithmic Ethnography

  • From a practical standpoint, algorithmic ethnography allows cyber-ethnographers to compartmentalize their research. Because the digital world is so entangled in our daily lives, it can be troubling to conduct research on the same devices that we use for our personal lives. Due to a focus on a holistic virtual environment, algorithmic ethnography demands that researchers separate their personal browsing from their investigations, which is healthier overall.
  • From a theoretical standpoint, algorithmic ethnography allows cyber-ethnographers to distill the effects of particular inputs on particular outputs. Again, ethnographers won't understand the full mechanics of algorithms, but they will understand their effects.
  • Through the use of built-in features such as "favorites" or "likes," algorithmic ethnography is auto-archival: that which produces new research data also constitutes research data.
  • Algorithmic ethnography is infinitely repeatable, so with more devices and accounts, researchers can create laboratories for different types of algorithmic experiences, investigating if different factors such as age, gender, race, or location affect the outputs of algorithms, in addition to experimenting with different inputs.
  • Algorithmic ethnography is necessarily immersive in that ethnographers have to frequently engage with material. If studying a community on particular platforms, for example, the frequent use of multiple different platforms will force ethnographers to be considerate of how information flows throughout the community, and it will equip ethnographers with the necessary background for understanding new posts (for example, some posts on Twitter might be illegible without having seen drama that occurred on Reddit).
  • Algorithms bring new research material to researchers. Rather than search for news stories, tweets, or memes, ethnographers can be given them by the algorithm, just as a community member would receive material.
  • Algorithmic ethnography acts as a counter to big-data understandings of user experiences. Done in combination with other analyses (for example, sentiment analysis of tweets or Reddit posts; see the sketch after this list), algorithmic ethnographers trade broad data for particular data, contextualizing mass information with what that information looks like in the space of a personal feed or timeline.
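To illustrate the kind of broad analysis that algorithmic ethnography contextualizes, the sketch below scores per-post sentiment with the tidytext package and its bundled Bing lexicon. The `tweets` object here is an invented placeholder standing in for text gathered during fieldwork.

```r
# A hedged illustration of broad sentiment analysis: tokenize each
# post, join against the Bing sentiment lexicon, and count positive
# versus negative words per post. The example data is invented.
library(dplyr)
library(tidytext)

tweets <- data.frame(
  id   = 1:2,
  text = c("The patriots are winning, trust the plan!",
           "They are censoring us again, this is corrupt.")
)

tweets %>%
  unnest_tokens(word, text) %>%
  inner_join(get_sentiments("bing"), by = "word") %>%
  count(id, sentiment)
```

Algorithmic ethnography then asks what these aggregate scores cannot: how such posts are actually encountered, in what order, and in whose feed.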

Warnings about Algorithmic Ethnography

  • As with any field site, the space of the phone is colored by the ethnographer's background and preferences, preferences that may become more pronounced as one makes more choices. Ultimately, algorithmic ethnography is the study of choices being made, so researchers may choose to watch one YouTube video over another, and that choice can have ramifications for their YouTube experience, even though some informants might have chosen the other video if given the chance. This may require ethnographers to make choices that go against their best impulses: for example, while the ethnographer might be interested in the pro-Biden tweets in their timeline, if they're studying an alt-right community, it doesn't make sense to like those tweets.
  • Algorithmic ethnography is a simulation in that it fakes being a user of certain apps within a particular community, and this can verge on being appropriative. As hinted at in the above bullet point, we need to embody particular personas, but we can't do that without being reductive: if we are not a member of a community, we cannot assume that what we're doing online is precisely what any given community member is doing, though we can use our best judgement (as in the Biden example). This means that when we write about our decisions, we must write about them while recognizing our immense control over the types of information we are receiving, as well as how those decisions were determined by our own assumptions and positions.
  • Algorithmic ethnography is reductive. Members of online communities will use their devices as I might use my device: haphazardly and with little obvious coherence. While we want to be selective about our inputs as ethnographers, this is not how many people use their devices. We don't want our algorithms to be influenced by a multitude of irrelevant inputs, and thus we must recognize that our inputs are selective rather than entirely authentic and holistic. It might be interesting to see what inputs an algorithm prioritizes, but such analysis can veer into armchair computer science.
  • Thus, algorithmic ethnography is focused on specific processes that produce emergent experiences, and the results of algorithmic ethnography should not seek any form of generalization. The results of this ethnography are rooted in the specifics, not the general. Process must be a main analytical object.

Case Study: QAnon

The work presented here is part of an ongoing BA thesis project investigating the conspiracy theory QAnon. This project is intended to demonstrate some of the challenges, opportunities, and strategies available to ethnographers as they investigate online communities and online media. To comply with IRB protocol, as well as to avoid making preliminary claims about ongoing and volatile research, I will focus primarily on the methods at work rather than on conclusions one can draw about QAnon. The group acts primarily as a stand-in for any Internet-based community.

What is QAnon?

QAnon is a far-right conspiracy theory that originated on 4Chan. In its nearly three-year lifespan, it has migrated across different platforms and gained a considerable following on many parts of the Internet, attracting enough attention (and concern) to be banned from some social media sites, with some members labelled domestic terrorists. For a brief introduction, you can listen to Joe M., a prominent content creator in the community (re-uploaded by Angel Wallace in order to prevent YouTube takedowns).

Step 1: Determine if your research subject is suitable for algorithmic ethnography

QAnon was founded on the Internet, and while its members have meetups offline, there are tens of thousands of members online across several different social media platforms. Members use the Internet to research parts of the conspiracy, read specific news articles, and buy from a commercial industry that sells Q products, all of which contributes to QAnon being viable for algorithmic ethnography. In this case, the act of using a phone is important to QAnons, and some describe spending upwards of eight hours a day on their phones. Everything considered, QAnon seems perfect for full algorithmic ethnography.

Step 2: Set up a "clean" device for research only

A while back, I purchased a cheap smartphone (around $50) for travel. I created a new Google account and set it up on that phone. On my home screen, I added several apps and set up profiles on each. To keep things honest, I identified myself as a researcher in all accounts, and I did not lie about my gender, race, or age.

Step 3: Gather inputs (and input) from your research subjects

Through my existing contacts in QAnon, I knew some Twitter accounts and YouTube channels to follow, so I simply followed my initial informants and asked them who else I should follow. From there, I let the algorithm do its work: I followed suggested Twitter accounts, and I watched YouTube channels that appeared in the sidebar. To act as a newbie, I also searched "QAnon" and followed the most popular accounts.

Step 4: Browse and browse frequently

While this is ongoing research, by this point I have listened to or watched over 100 hours of QAnon content, and I have followed over 800 QAnon accounts. I browse Twitter regularly but like only the tweets I find important for research; so far, I have liked about 600.

Step 5: Ask for evidence of your subjects' algorithms

I haven't performed this task yet. I have anecdotally asked, "What accounts do you check after a big news story?" but gathering screenshots and more personal information is off the table until the end of the research, since QAnons are hard to build rapport with. However, some post videos of their Twitter timelines, so I can simply watch those. On the right are examples of my own YouTube and Instagram feeds, the kind of evidence one might ask QAnons to provide.

Step 6: Archive and archive frequently

For the past three months, I have been downloading all the QAnon tweets I could find using R. QAnon was recently banned from Twitter, and while the ban is hard to enforce, I noticed a nearly 20-fold reduction in tweets, demonstrating the necessity of frequent archival.
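A minimal sketch of this kind of collection uses the rtweet package (one common R approach, not necessarily my exact pipeline; it requires Twitter API credentials, and the standard search endpoint reaches back only about a week, which is one more reason frequent archival matters):

```r
# A minimal tweet-archiving sketch using the rtweet package. Requires
# Twitter API credentials; the standard search endpoint only reaches
# back roughly a week, hence the need for frequent runs.
library(rtweet)

q_tweets <- search_tweets("#WWG1WGA OR QAnon", n = 5000,
                          include_rts = FALSE, retryonratelimit = TRUE)

# Stamp each harvest so later takedowns and bans can be tracked over time
outfile <- paste0("qanon_tweets_", format(Sys.Date(), "%Y%m%d"), ".csv")
write_as_csv(q_tweets, outfile)
```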

Step 7: Download your digital presence

On the right is an example of some of the data that researchers can receive by downloading their Facebook account. This can also be done for Google, YouTube, Twitter, and Instagram. I can review my search history, messages, and more.

Step 8: Analyze your data

One of the first analyses we can perform is a simple judgment of how effective our algorithmic immersion has been. That is to say: was I receiving content curated to the interests of my field site? The following images demonstrate the various levels of success in my investigation. On the left, Twitter interests data shows that the platform has identified me as a political person (with interests such as "William Barr," "Donald Trump," "Ilhan Omar," "James Clapper," "Jason Kenney," and "James Comey," all figures in the deep state narrative of QAnon). Twitter also recognizes my conspiratorial interests, including subjects such as "Wayfair," "Jeffrey Epstein," "White Rabbit Project," and "Jack Dorsey," which are all related to various conspiracies about child trafficking and information warfare. Twitter's "interests" confirm that the platform recognizes my interests. The next screen comes from my most recently liked TikTok videos: I liked the first nine to appear in my feed (that is, the first nine algorithmically generated videos), and all nine are related to Trump, child trafficking, and the spiritual part of QAnon. My Facebook notifications come from QAnon groups, my Instagram images are pro-Trump and anti-Hollywood elite, and my YouTube recommendations (bottom) range from Fox News to sports videos. On the right, my Google newsfeed is a little less curated, with popular, mainstream news sources on national issues and local, location-based topics. So while some platforms (TikTok, Facebook, and Instagram) algorithmically deliver solely QAnon content, others (Twitter and YouTube) deliver a mix of general topics as well, while Google has remained mostly unchanged by my conspiratorial searches, relying on national and geographic trends over personal taste.

From left to right: Twitter interests, TikTok likes, Facebook notifications, Instagram "discover" feed, and Google newsfeed. Bottom: YouTube recommended topics.

As a final illustration of this principle, we can look at the accounts I am recommended to follow. In the first image, I am recommended three accounts to follow based on the account I am viewing ("JFKjrdaughter"). They all have QAnon descriptors in their bios. The middle screen shows who I should follow based on who I currently follow and what tweets I like, most of whom seem related to QAnon. The final screen shows how generalized "interests" can skew the algorithm, suggesting popular, non-QAnon figures like Joe Biden and Queen Naija, but generally it recommends more conservative, right-wing figures to me.

These various screenshots demonstrate my immersion into my community of choice. Based on who I followed on Twitter and Instagram, who I subscribed to and watch on YouTube, which videos I liked on TikTok, and which groups I joined on Facebook, I will be delivered content that aligns with the platform's knowledge of me. This is important: if I am recommended a few QAnon videos based on their link to right-wing media (which I was after watching Thomas Sowell and Fox News), then I can enter into a relationship with YouTube that exposes me to more conspiratorial videos. The same goes for Twitter and TikTok: if I like and follow a few accounts, I'll be recommended more. Many of these recommendations are couched in personal appeals ("Based on your activity" or "Because you follow [x person]"), which suggest that consuming the recommended content is not only logical but will be rewarding. In these ways, bubbles are constructed.

Here we have an example of one such social media bubble. In a poll of nearly 17,000 people, the owner of a popular Twitter account asks his followers about their political allegiances. Considering that Trump lost the popular vote in 2016, it is surprising that nearly 78% of his followers said they voted for Trump and only 2% said they voted for Hillary, unless, of course, we remember that this user tweets within an algorithmically-constructed bubble limited to QAnon and Trump supporters.

QAnon members are highly aware of how the algorithm works. They frequently tweet about how Twitter suppresses their content through predatory censorship practices, and they try to overpower this censorship by mass-spamming hashtags. They regularly use a litany of hashtags in order to be easily identifiable. QAnon members also regularly encourage "follow trains," in which users add their usernames to a Twitter thread in order to gain more followers. They want to bolster their numbers because they know that this will give them more publicity, especially now that they are technically banned in some capacities. As a frequent user of QAnon Twitter, I found it incredibly easy to see this dynamic in action: a popular account was banned and, in less than a week, had resurfaced with 80,000 followers, a testament to QAnon's rapid promotion of members. I also found it incredibly easy to be immersed in new content, since I was frequently recommended users with obvious QAnon messages in their bios. Thus, the algorithm helps to maintain and grow social networks, even as QAnon members constantly try to break out of their echo chamber and claim new members from across Twitter. Their YouTube videos are equally propagandist, using a litany of tags and mass postings to accumulate views and draw in users from related streams, such as people who watch conservative talk shows. YouTube happily feeds conspiratorial thought by recommending more conspiracy videos.

QAnon Content

In this section, I will provide the reader with a barrage of material that I've gathered from my ethnography of QAnon, providing light analysis of the content shown. This portion is less analytical and more focused on demonstrating the variety of content and conclusions we can accumulate from algorithmic ethnography.

The meme on the left, "The Hydroxychloro-Queen," borrows from the meme on the right, the "Chad" meme, which originally comes from 4Chan. In the Chad meme, "Chad" represents a powerful alpha male, and he is a recognizable figure in alt-right circles for that reason. By making Dr. Stella Immanuel (bottom), a doctor who supports Trump's unfounded claim that HCQ cures COVID-19, into a Chad, the meme's creators are praising her as a powerful alpha figure in the right-wing community. Breaking down any meme requires a certain level of background knowledge, and this one demonstrates how a meme that originated on a fringe website known for its abhorrent misogyny can become a humorous image for radical supporters of a political and public health conspiracy theory.

This meme format, once used to demonstrate a playful, joking relationship between then-Vice President Joe Biden and President Barack Obama, is now used by the alt-right as a way of mocking Biden's age and alleged pedophilia.

This collection of memes condemns Hillary Clinton as a criminal who will soon be arrested for the content of her emails. It also suggests that she is planning the murder of Ghislaine Maxwell (right), a key figure in an ongoing investigation of a child trafficking ring, supposedly to protect her husband, Bill Clinton, just as she "killed" Jeffrey Epstein, a convicted sex offender whose death has become a foundational part of QAnon. These memes invoke old Trump slogans about Hillary's criminality, invoke an image of Trump as a morally righteous patriot, and play into QAnon's theories about politicians and pedophilia. They demonstrate the ways in which conspiratorial thinking is intimately tied to nationalism and stories of elite corruption, while also showing how members use a variety of methods to convey their messages.

These final images can tell us a lot about QAnon, especially given the rich cultural knowledge invested in memes. The leftmost image reminds us of the innate sexism and male fantasies that are a part of alt-right culture, suggesting that the woman will get naked to remove the Confederate flag. It also points to the inherently racist nature of the alt-right: the fact that the central figure is a white, blonde woman is no mistake. We can contrast this with the image on the right, which perpetuates a common theory that Michelle Obama is secretly a transwoman, no doubt a commentary on the body types of black women, while also criticizing the news media's ridicule of Trump ("orange man"). The transphobia continues in the bottom left image, which not only reiterates the claim that Republican women are attractive but also reminds us of the right's intimate attachment to guns, the military, and state violence.

These memes can teach ethnographers a great deal. They act as cultural substance for the alt-right community, and they demonstrate how a variety of social topics cohere in images: in just 10 memes, we see the presence of misogyny, transphobia, sexual objectification, militarism, gun ownership, racism, white nationalism, and patriotism in QAnon culture. We also see how conspiracy theories are perpetuated and spread through jokes, and how much content—humorous, political, or conspiratorial—can be embedded in cultural material. Further, we can acknowledge how these images were delivered algorithmically, and in analysis could examine their total number of engagements, as well as how they were interpreted and received by the community in question. Finally, we can test ourselves as ethnographers, attempting to see if we understand the material that is being spread. If we lack the knowledge to sufficiently unpack this material, then we obviously are missing out on content within the community.

This next meme comes from a moderately successful QAnon channel, run by a woman who got into Q only a few months ago. In it, she borrows from C+C Music Factory's (or, alternatively, Arsenio Hall's) "Things That Make You Go Hmmmm...." She has collected a number of images, many photoshopped or factually inaccurate, that, presented without commentary, act as provocations about our political system and Hollywood, suggesting that there is more than meets the eye. This type of thinking is common in conspiratorial groups, and her video acts as yet another demonstration of how conspiracy theorists use their online platforms to destabilize more rigorously vetted journalism. She can always claim that she is not an expert and is merely asking questions, but the video she puts out makes an argument, even if it does not take a particular stance.

This next set of videos comes from the #TakeTheOath movement, a brief trend in which followers of QAnon recited an oath swearing their allegiance to the United States (which is an actual oath) but added the phrase "Where We Go One We Go All" to the end, a popular QAnon slogan.

My final round of ethnographic content comes from TikTok, which is a growing platform for QAnon supporters. Having attracted the attention of teenagers interested in urban legends, a great deal of QAnon TikTok videos overlap with common "bizarre" internet videos, such as this one:

This next video plays off of a popular meme format, usually used by teenagers to express their obsession with the internet. It is unusual to see someone this young (and this in touch with viral trends) being a QAnon supporter. In it, she references two very popular QAnon indoctrination videos ("Out of Shadows" and "Fall of the Cabal").

The next TikTok comes from a viral QAnon supporter. A recent convert, she has risen to fame for denouncing her liberal past and embracing the truth of QAnon. She talks about the stigma attached to joining QAnon, which shows how TikTok can be a valuable place for hearing the stories of QAnon supporters, who are notoriously difficult to interview, and also affirms the general notion that conspiracy theorists see themselves as victims.

Our final TikTok comes from someone explaining the logic of QAnon (notice again that it is a young woman, something likely shaped by the male gaze, which algorithms often reflect as men's engagement bolsters the popularity of a social media post). This is a marked departure from the video we saw at the beginning from Joe M., which was 5 minutes long and barely scratched the surface of QAnon's beliefs. TikTok's limitation (videos must be short) creates room for a new type of content creator and makes the conspiracy theory more accessible to a younger audience that prefers brevity over hours-long YouTube videos.

In this section on QAnon content, we've touched upon just a few topics that are more accessible to researchers due to algorithmic ethnography. In memes pulled from Twitter and Instagram, we've seen how we can learn about the cultural knowledge and preferences that exist in online communities, testing ourselves to unpack these rich images. These images can capture political beliefs, humor, and tastes that constitute the social fabric and unity of an online community. These images also speak to how information on the internet is deeply entrenched in an online culture unfamiliar to outsiders, and how intimately connected different platforms can be when it comes to producing new content. Videos made by users can also give us a sense of the demographics of members, or, at the very least, show us which demographics receive the most attention (in this case, young, attractive white women). They can also give us a sense of how community members understand the community, conceptualize themselves, relate to other members, and use social media to socialize and promote their beliefs. Put in conversation with the earlier images of accounts that Twitter recommends users to follow, we can see how social networks are constructed on different digital platforms. Further, in monitoring the trends, community interactions, and built-in social metrics (like retweets and view counts), we can get a sense of how community members interact with each other across platforms and within platforms. Social network analysis, content analysis, demographic analysis, analysis of information technology: all of these things are contextualized within an algorithmically-driven online ecosystem in which the ethnographer has become fully immersed. While these platforms can be webscraped, data-mined, and textually analyzed, this method puts ethnographer choice at the forefront of data collection, letting the vast virtual space become a place of embodiment and decision-making, rather than thoughtless data harvesting.

Conclusion

Many online communities participate in asynchronous, text-based communication; popular content creators who produce videos or music or podcasts may have limited engagement with their fans; and interactions between members may be restricted to private messages that ethnographers do not have access to. For all of these reasons, some robust communities online are regularly dismissed as not containing real social content, unworthy or unable to be subjects of ethnographic field work. Algorithmic ethnography pushes against this bias both within classical ethnography and ethnography of virtual worlds. It demands that ethnographers not only observe and harvest social interaction online, but participate in it, dedicating entire devices to the analysis of these communities. Cyber-ethnographers must become entrenched in the conversations, trends, and platforms of their desired community, bringing the "participation" to participant-observation on forums, websites, and apps. For privacy reasons, my conversations with community members were left outside of this analysis, but through my ethnographic investigations, I have participated in dialogues in Twitter threads, I have been added to group messages, and I have been spontaneously given suggestions for where else to look for data and information. Algorithmic ethnography demands an embodied, decision-driven form of analysis that recognizes the ethnographer as a community participant organically receiving information and dialogue as any community member would. It is a multipronged, multiplatform, and multimedia endeavor that seeks to bring digital spaces to life through enactment of community members' social impulses.

Directions for Future Research

Algorithmic ethnography has been theorized here in the context of a deviant, highly political group that treats online platforms as a necessary part of its community organization. That line of inquiry alone can allow for robust analyses of the social media of online activism in different global contexts. It also allows for the theorizing of communities that are necessarily online due to their deviant stigma, especially those born on the Internet, such as ones involving video games or fandoms. However, these platforms are also incredibly new as social institutions, and this method may face extinction as the ways in which we socialize online change. For now, we can think of different ways to theorize online communities and the algorithms that mediate and make social interaction. For example, ethnographers could develop multiple online personas and experiment with different inputs into the algorithm, a process that may very well be aided by the creative use of bots online. I could put my own online history in conversation with what I find on QAnon (for instance, as someone with a leftist online presence, I am always surprised when my online communities overlap with QAnon, and my non-QAnon accounts have sometimes shared content with my QAnon accounts). We can also combine this ethnography with more in-depth knowledge of algorithms and social media mechanics, should we want to theorize both the causes and effects of algorithms, something I earlier argued algorithmic ethnography should not endeavor to do. Surely different research contexts, backgrounds, and aims will enable this form of investigation to have a variety of uses.

Closing Thoughts

I want to end with a final clip from the same House Judiciary Committee hearing. This time, a different Republican representative questions the biased nature of information technology. His accusations of election interference are denied by Google CEO Pichai, and his Democratic colleague in the House dismisses his comments as "conspiracy," a now-common derision applied to conservatives questioning the validity of news media, elections, and information technology. Also included is another Democratic colleague's demonstration of how Facebook's algorithm does not stifle conservative thinking and in fact bolsters sensational and false conservative news. The clip concludes with a Democratic representative's similar argument, wherein he calls the Republicans paranoid thinkers with victim complexes, something that is never really addressed as the Committee wraps up its questioning. What this all demonstrates is that content online, especially in terms of how it is disseminated, is not something that should be discussed only in relation to fringe movements because, as this clip shows, its power to mislead, antagonize, and confuse has reached the highest levels of government. The algorithms that determine our reality are worthy of investigation just as surely as they are worthy of regulation.

Credits:

Created with images by Markus Spiske - "Green Matrix rain on a screen" • Louis Reed - "A 360 panorama stitched and warped to create the tiny planet effect. Image sequence taken by drone above a community field in Wales." • SkyNews - "QAnon" • Angel Wallace - "Things That Make You Go Hmmmm....", "Q Killing the Mockingbird" • All other images are within the public domain. Names of non-public figures are censored, and credit is not provided for various images and videos to ensure privacy.