AI Summer, AI Winter

Jesse's Weekender #7, November 11 - 17

I appreciate how optimistic people are about text generation tools like ChatGPT and Claude. Personally, I'm concerned about any production use of models that lacks strict oversight rules and accountability for training data, especially in digital social spaces.

It feels like we need transparent, international controls over the data used to train models and over the algorithms through which content, recommendations, and inferences are served to the general public. Otherwise, commercial interests (which a U.S. president has already admitted are more important than peace) are bound to create massive amounts of pseudo-signal in digital spaces: on one hand capitalizing on the psychological effects of exposure and social proof to sell products, and on the other carrying out and exacerbating the outcomes of political disinformation campaigns.

But strict controls and transparency over training data won't be enough. The general public is unlikely to ever have the requisite time and energy to inspect the data, recognize when models have been trained for lawful-evil purposes, and then petition their government for a redress of grievances in a way that leads to positive legislative action toward healthy digital communities. (I think this task will be relegated to the fringes of society, just as it is now, with journalists from big corporate outlets interested in these topics only as a means of capitalizing on controversy.)

So what do we do? How do we prevent information pollution in digital spaces when commercial interests and state actors have both the means and motive to carry out widespread campaigns of social influence? Would we need to reconsider how we as people, corporations, and governments treat digital spaces—perhaps considering them as "the means of connectedness" to drive home the distinction between human digital connectedness as a tool for interpersonal communication versus a tool for mass influence? Is this even possible under our current socioeconomic systems?

I've always wondered what would be different if we treated online public spaces like national parks. What would we allow and not allow? What could people count on, and what could they trust (and why) about existing in that space and sharing information with each other?

As generative text models grow in complexity and use, I don't find myself cautiously optimistic like I thought I would be. Instead, I'm only feeling cautious. I know good people with great imaginations are using this tech and experiencing the same thrill they felt when they first used a cell phone (back in the day, it was mind-blowing to be able to call someone from anywhere). But I also know there are bad people with limited imaginations using this tech who see in it an advanced method of exploitation in the name of profit.

Thoughts from this week:

  • I'm taking some time off between jobs to work on projects that bring me joy, so I'm not going through the normal patterns of job-seeking mental self-flagellation (yet). But the few times I do check who is hiring, and for what, I'm disheartened to find so much effort to repurpose, rebrand, and revitalize things by slapping "AI" on them and pretending that novel technologies have finally been discovered.
  • TikTok is a dangerous place for gathering data to form opinions about the world around us. The algorithm is designed for entertainment, not information, so you end up in a rabbit hole of content that only confirms your biases and conforms to your pre-existing worldview. There is no way for a user to "break out" of the algorithmically curated silo. (A toy sketch of this feedback loop follows this list.)
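
To make that feedback loop concrete, here's a minimal toy simulation in Python. It's purely hypothetical, a sketch of a generic engagement-greedy recommender, not a claim about TikTok's actual system; the topics and numbers are all made up. Because exposure nudges interest upward, the greedy loop settles on one topic and stays there:

```python
# Hypothetical sketch of an engagement-greedy recommender (NOT TikTok's
# real system). Shows how "serve whatever engages most" plus
# "exposure increases interest" collapses a feed into a single topic.
import random

TOPICS = ["politics", "cooking", "sports", "music", "pets"]

# Made-up user: slightly uneven initial interest in each topic.
interest = {t: random.uniform(0.4, 0.6) for t in TOPICS}

# Recommender state: running estimate of engagement per topic.
estimate = {t: 0.5 for t in TOPICS}

for _ in range(200):
    # Greedy choice: always serve the topic with the highest estimated
    # engagement. No exploration, no diversity constraint.
    choice = max(estimate, key=estimate.get)

    # Simulated user: engages with probability equal to their interest.
    engaged = random.random() < interest[choice]

    # Nudge the engagement estimate toward the observed outcome.
    estimate[choice] += 0.1 * (engaged - estimate[choice])

    # The feedback loop: being served a topic and engaging with it
    # makes that topic slightly more engaging next time.
    if engaged:
        interest[choice] = min(1.0, interest[choice] + 0.02)

print("Feed is dominated by:", max(estimate, key=estimate.get))
print({t: round(estimate[t], 2) for t in TOPICS})
```

The point of the sketch is that nothing in the loop ever asks whether the content is true, diverse, or informative; the only signal it optimizes is whether the content holds your attention.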

Around the Net

  1. Taylor Lorenz sheds light on a recent Pew Research report, connecting a trail of information from the past half-decade that points to an egregious hypocrisy among right-wing influencers:
“There is no evidence to support the claim that the major social media companies are suppressing, censoring or otherwise discriminating against conservatives on their platforms,” said Barrett, the disinformation researcher who worked on the report. “In fact, it is often conservatives who gain the most in terms of engagement and online attention, thanks to the platforms’ systems of algorithmic promotion of content.”
The majority of news influencers are conservative men, study finds
The latest Pew Research report on news content creators paints a damning picture of our skewed information ecosystem

  2. Wes Davis shares recent academic research that seems to indicate X's content algorithm is biased toward right-wing content:
The study’s results are similar to other recently reported findings by The Wall Street Journal and The Washington Post of potential right-wing bias in X’s algorithms.
A study found that X’s algorithm now loves two things: Republicans and Elon Musk
X boosted Musk’s posts as he engaged with the election.

  3. Stephen Robinson articulates what a lot of us are feeling about how social platforms, including the media outlets that use them, are largely responsible for the recent political outcomes we find ourselves still coming to grips with:
Legacy media shoulders significant blame for their “sanewashing” of Trump’s incoherency and deteriorating mental state. Voters believed Trump could fix a steadily improving economy despite his promotion of inflationary tariffs. The media even presented Trump’s rants as cogent discussions of economic theory.
How a right-coded media environment helped Trump win
TikTok, X, and the manosphere are pushing young voters away from Dems.

This issue of Weekender, along with everything else I've worked on this week, would not be possible without you. I left my corporate job at the beginning of October 2024 to focus on my work: video essays on TikTok, written essays here on my blog, and open-source development of social technologies. Your continued financial support makes it possible for me to keep going.

—Jesse
