Some thoughts on the current state of AI from a disinformation research perspective

What I am concerned about in the present and near future regarding technologies like ChatGPT and Stable Diffusion

Conspirador Norteño
Mar 31, 2023

Artificial intelligence technologies grow more advanced with each passing week, and large language models such as ChatGPT and image generation models such as Stable Diffusion have progressed particularly rapidly in recent months (or, at least, progressed in ways that are publicly obvious and widely discussed). This progress has been accompanied by a mix of grandiose utopian predictions and apocalyptic fearmongering about potential social, political, and physical consequences of these technologies. While some of these concerns are overblown (ChatGPT is not going to spontaneously evolve into Skynet, for example), recent advances in AI do present a variety of risks. Here are four of my current/near-future concerns from a disinformation research perspective:

collage of images generated with Stable Diffusion
These images were generated with a Python script that fed simple phrases into Stable Diffusion. Also, the rabbit is creepy.
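The caption above mentions a Python script that fed simple phrases into Stable Diffusion. A minimal sketch of that kind of script, using the Hugging Face `diffusers` library, might look like the following (this is an illustrative reconstruction, not the author's actual script; the model ID and phrase fragments are assumptions):

```python
import itertools


def build_prompts(subjects, styles):
    """Combine simple phrase fragments into prompts, one per image."""
    return [f"{style} of a {subject}"
            for subject, style in itertools.product(subjects, styles)]


def generate_images(prompts, outdir="out"):
    """Render each prompt to a PNG. Requires the `diffusers` and `torch`
    packages plus a model download, so the imports are kept local here."""
    from pathlib import Path

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    pipe = pipe.to("cuda" if torch.cuda.is_available() else "cpu")
    Path(outdir).mkdir(exist_ok=True)
    for i, prompt in enumerate(prompts):
        image = pipe(prompt).images[0]
        image.save(f"{outdir}/{i:04d}.png")


# Example usage (commented out because it downloads a multi-GB model):
# generate_images(build_prompts(["rabbit", "lighthouse"], ["oil painting", "photograph"]))
```

Even this trivial combinatorial approach yields an arbitrarily large supply of distinct prompts, which is exactly what makes the mass-production scenarios below cheap.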

Mass account creation tools will become better at generating accounts that look “real”

The combination of increasingly powerful text-to-image models such as Stable Diffusion and large language models such as ChatGPT will enable those in the business of writing mass account creation tools to substantially improve their products. Stable Diffusion runs sufficiently well on a decent MacBook to generate unique profile images for thousands of accounts per day, and large language models can provide an endless supply of organic-looking unique text snippets for biographies and initial posts. Although the output of current text-to-image models still contains plenty of “uncanny valley” artifacts that become obvious when images are closely inspected, the generated images are less obviously similar to one another than previous types of AI-generated images (such as the perennially popular StyleGAN faces), making it more difficult to notice groups of accounts created with the same tool.
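One reason the uniform alignment of StyleGAN faces made them easy to spot in bulk: features such as the eyes sit at the same pixel coordinates in every image, so averaging many suspected profile photos together preserves those features instead of blurring them away. A toy NumPy sketch of that averaging check (illustrative only; the synthetic "images", patch sizes, and scoring are made up for demonstration):

```python
import numpy as np


def average_image(images):
    """Pixel-wise mean of a stack of same-sized grayscale images."""
    return np.mean(np.stack(images), axis=0)


def alignment_score(avg):
    """Crude structure measure: variance of the averaged image.
    Features at fixed positions (like StyleGAN eyes) survive averaging,
    so the mean image keeps contrast; unaligned content averages to mush."""
    return float(np.var(avg))


# Toy data: "aligned" images share a bright patch at a fixed spot,
# while "unaligned" images place the patch randomly.
rng = np.random.default_rng(0)


def toy_image(x, y, size=32):
    img = rng.random((size, size)) * 0.2  # dim noise background
    img[y:y + 4, x:x + 4] = 1.0           # bright 4x4 "feature"
    return img


aligned = [toy_image(10, 10) for _ in range(100)]
unaligned = [toy_image(rng.integers(0, 28), rng.integers(0, 28)) for _ in range(100)]

print(alignment_score(average_image(aligned))
      > alignment_score(average_image(unaligned)))  # prints True
```

Diffusion-model outputs behave like the "unaligned" case here, which is precisely why this class of bulk-detection trick loses its bite.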

collage of identical tweets
social media bots (such as these accounts tweeting about bobcats) have so far been more spammy than chatty, but large language models may change that

Social media chatbots will become more relevant

Although it’s become somewhat customary on social media platforms like Twitter to dismiss rude and argumentative accounts operated by strangers as “bots”, the role of automation in social media manipulation has thus far been largely limited to various forms of spam, and the notion of large armies of automated accounts engaging in organic-looking conversation has been mostly fictitious. The widespread availability of large language models such as ChatGPT changes this, as creating software that (for example) engages in time-wasting arguments with human social media users is now practical and within reach of pretty much anyone with coding skills and free time. This doesn’t mean you should assume everyone you meet online is secretly a chatbot (if you are conversing with a large language model, you’ll probably get responses sooner or later that make it clear you’re not speaking with a sentient being), but it is reasonable at this point to assume that chatbots are or will become part of the social media landscape, simply because they’re now relatively easy to make.
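The barrier to entry described above really is low. A skeletal reply-bot loop might look like the sketch below, with the language-model call stubbed out (`generate_reply` stands in for any LLM API; the function names and canned response are hypothetical, and a real bot would also need platform API hooks for reading and posting):

```python
from collections import deque


def generate_reply(conversation):
    """Placeholder for an LLM API call (e.g. a chat-completions endpoint).
    A real bot would send `conversation` as the prompt and return the
    model's text; here we just return a canned contrarian response."""
    return "I disagree with that. Source?"


def run_argument_bot(incoming_mentions, max_context=10):
    """Reply to each mention, keeping a rolling window of the conversation
    so the (stubbed) model call can see recent context."""
    history = deque(maxlen=max_context)
    replies = []
    for mention in incoming_mentions:
        history.append(("them", mention))
        reply = generate_reply(list(history))
        history.append(("bot", reply))
        replies.append(reply)
    return replies
```

The entire "intelligence" of such a bot lives in the single `generate_reply` call; everything else is a few dozen lines of plumbing, which is the point.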

Eliot Higgins (@EliotHiggins), Mar 14, 2023:
“🧵I've been taking a look at some of the sources linked in the @TheGrayzoneNews article by @LucyKomisar attacking the Navalny documentary, and boy, are there some big issues with the sourcing. Komisar's research has been aided by AI, and not the smart kind”

News articles that partially or fully consist of non-factual AI-generated text will become increasingly common

On March 13th, 2023, fringe news website The Grayzone published an article about the documentary ‘Navalny’ that, upon closer inspection, turned out to contain AI-generated references. The article in question was subsequently edited and then removed by The Grayzone, but the use of misleading AI-generated text in news articles will likely occur with increasing frequency; BuzzFeed News, for instance, has been experimenting with AI-generated travel articles. Generating massive amounts of text provides a convenient option for astroturfed “news” websites that currently operate by recycling or plagiarizing content to create the illusion of a large organization with a sizable staff (and some such sites have already been using AI-generated “photos” of their “authors”).

image generated by Midjourney AI of the nonexistent "2001 Great Cascadia Earthquake"
close inspection of various aspects (the weird height/positioning of the light poles on the left, for example) reveals that this image is artificially generated

AI-generated images can generally be “debunked”, but deepfakes of shocking events that look real at first glance will still go viral

In the past few weeks, images produced by AI image generator Midjourney of everything from Donald Trump’s arrest to an earthquake that never happened to the Pope in a puffer jacket have gone viral on social media. The creators of these particular images clearly labeled them as synthetic when they shared them, but at least some of them look sufficiently “real” that they might well have gotten significant traction had they been disingenuously presented as real photos. For example, a synthetic scene of urban destruction such as the image above of the “2001 Great Cascadia Earthquake” could potentially be portrayed as a scene from the aftermath of the recent earthquake in Turkey or damage from a Russian missile strike in Ukraine. Although the artificial origin of such an image would likely be uncovered once someone scrutinized it closely, it is nevertheless reasonable to assume that misleading images of this sort will go viral from time to time (and people will “believe” or otherwise be emotionally affected by them) since dramatic images tend to be shared profusely and debunking takes time.
