Generated For You

11 Aug 2023 - Jack Hullis

Social media platforms already use advanced recommendation algorithms to accurately predict what users will want to see next. In the age of AI, we should consider how artificial intelligence could be integrated with these algorithms to generate personalised content, and how this might intensify the psychological biases that content exploits today.

The predictions made by recommendation algorithms are used to find and serve users content they'll find engaging. This is a system which benefits both the user, who only sees relevant and interesting content, and the platform, which profits from having more engaged users to advertise to.

TikTok is an example of such a system. When users first start using TikTok, they are shown a huge variety of videos. Based on engagement metrics including video watch time, replay frequency, and interactions such as likes and shares, TikTok can make an educated guess at how interested the user was in that type of video.
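To make that concrete, here is a minimal sketch of how raw signals like these might be folded into a single interest estimate. The field names and weights are invented for illustration; TikTok's real model is proprietary and learned from data rather than hand-weighted.

```python
from dataclasses import dataclass

@dataclass
class VideoEvent:
    watch_seconds: float  # how long the user watched
    video_seconds: float  # total length of the video
    replays: int          # times the user rewatched the video
    liked: bool
    shared: bool

def engagement_score(e: VideoEvent) -> float:
    """Fold raw signals into one interest estimate (higher = more interested)."""
    completion = min(e.watch_seconds / max(e.video_seconds, 1e-9), 1.0)
    # Hypothetical hand-picked weights; a real system would learn these.
    return (
        1.0 * completion
        + 0.3 * min(e.replays, 3)  # cap replays so looping videos don't dominate
        + 0.4 * e.liked            # bools coerce to 0/1
        + 0.5 * e.shared
    )

print(engagement_score(VideoEvent(28.0, 30.0, replays=2, liked=True, shared=False)))
```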

TikTok also predicts user interests by analysing data from other similar users. Think 'people who bought X also bought Y' on sites like Amazon. Grouping users together means more data, which allows for more accurate predictions. Over time, by optimising for engagement, TikTok can learn the types of content that a user does and doesn't like.
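This 'similar users' idea is classic collaborative filtering. Below is a toy user-based version using cosine similarity over a user-by-category engagement matrix; the data and the similarity measure are purely illustrative, not a description of TikTok's actual system.

```python
import numpy as np

# Rows are users, columns are content categories; entries are engagement
# scores like the one sketched above. Toy data for illustration only.
R = np.array([
    [5.0, 0.0, 3.0, 1.0],   # user 0
    [4.0, 0.0, 3.5, 1.0],   # user 1 (similar tastes to user 0)
    [0.0, 5.0, 0.0, 4.0],   # user 2
])

def recommend_for(user: int, R: np.ndarray) -> np.ndarray:
    """Score categories for `user` from the tastes of similar users."""
    norms = np.linalg.norm(R, axis=1, keepdims=True)
    sims = (R / norms) @ (R[user] / norms[user])  # cosine similarity to each user
    sims[user] = 0.0                              # exclude the user themselves
    return sims @ R                               # similarity-weighted category scores

print(recommend_for(0, R))  # user 1's tastes dominate the scores for user 0
```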

If users are only shown content that they like, they will spend more time engaging with the platform. As noted earlier, media companies like TikTok profit from increased user engagement as it allows them to sell more advertisements. This is why attention has become the currency of the internet. Platforms must fight for users' attention, causing them to use increasingly aggressive tactics in order to attract the most users and to boost engagement.

Implementing AI

As the accuracy of recommendation algorithms improves, the limiting factor for increasing user engagement quickly becomes the pool of available content. Even if the algorithm knows exactly what the user wants to see, finding existing content that matches is extremely difficult. And that is assuming such content even exists.

Generative AI can be used to solve this matchmaking problem: instead of searching for content, the recommendation algorithms could be used to prompt its generation. Provided the algorithms are accurate, the generated content would likely fit our tastes better, and it would come in effectively infinite supply.
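In code, that inversion is small: instead of the recommender's output feeding a search index, it feeds a prompt. A sketch, with a hypothetical function name and a made-up prompt template:

```python
def build_generation_prompt(interest_profile: dict[str, float], top_k: int = 3) -> str:
    """Turn a learned interest profile into a prompt for a generative model.

    `interest_profile` maps topics to predicted engagement, e.g. the
    output of a recommender like the sketch earlier in this post.
    """
    top = sorted(interest_profile, key=interest_profile.get, reverse=True)[:top_k]
    return (
        "Write a 30-second short-form video script combining the themes: "
        + ", ".join(top)
        + ". Optimise for an immediate hook and rewatchability."
    )

profile = {"cooking": 0.9, "cats": 0.8, "finance": 0.2, "travel": 0.5}
prompt = build_generation_prompt(profile)
# `prompt` would then be sent to a text-to-video or script-generation model.
```

The search step disappears entirely: the recommender no longer ranks a finite catalogue, it parameterises the generator.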

Generative AI has already demonstrated its ability to produce text, voices, audio and music, images, videos, and now even full-length TV series episodes. AI tools also exist that can edit longer videos into multiple shorter clips suitable for TikTok or YouTube Shorts. These tools assist users by finding interesting segments, cutting them up, cropping them, adding facial tracking, and animating subtitles.

The Risks

Given this control, what new types of content will AI come up with? Already we occasionally see human-generated content that we find engaging yet hard to explain. Recent examples include pinkydoll, sludge content, and mukbang videos. Will AI discover more of these niches? Will the content we enjoy become progressively weirder, and will that weirdness become normalised? Will we find traditional human-generated content boring in comparison?

And if the answer to these questions is yes, where will it stop? Will content become unrecognisable, optimised to the point of maximum engagement? Perhaps AI will discover that a seemingly random assortment of colours can mesmerise us by exploiting our brain's reward system. While it seems far-fetched, this sort of behaviour is observable in animals, like a cat chasing a laser, or a hamster running in a wheel. Could AI content exploit human psychology in a similar way?

If these systems produce content optimised to engage specific users, what we are really building is personalised echo chambers: filter bubbles. The content we see will not necessarily care about truth, about being educational, or even about making us feel good.

By focusing only on increasing engagement, recommendation algorithms already inadvertently exploit biases like negativity bias. Take doomscrolling, where people spend "an excessive amount of time reading large quantities of negative news online", a popular habit among users of platforms like Twitter and Reddit that has been linked to declines in mental and physical health. Content generated specifically for us could learn to play on these biases far more effectively.

Generated content is also a huge opportunity to leave behind an engagement-driven internet and move towards one optimised for more humanistic goals like truthfulness and mutual understanding. Considering how much time we collectively spend consuming media, ensuring that it is educational as well as entertaining would be massively beneficial for individuals and for society. The content we consume should prioritise our mental health, working to make us happier and more optimistic, and should showcase opposing views to avoid the creation of filter bubbles. If we found a way to optimise for these humanistic goals rather than just engagement, personalised AI-generated content could have a huge positive impact by correcting many of the modern internet's negative tropes.
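One way to express that shift is as a change to the ranking objective: blend predicted engagement with other estimated signals rather than sorting by engagement alone. The sketch below assumes the hard part, models that can reliably estimate accuracy and wellbeing effects, already exists; every field name and weight here is hypothetical.

```python
def humanistic_score(item: dict, seen_topics: set[str]) -> float:
    """Re-rank content by blending engagement with other objectives.

    Each `item` is assumed to carry model-estimated fields:
      engagement - predicted watch/interaction value
      accuracy   - estimated factual reliability
      wellbeing  - predicted effect on the user's mood
      topic      - content category
    All fields and weights are illustrative.
    """
    novelty = 0.0 if item["topic"] in seen_topics else 1.0
    return (
        0.4 * item["engagement"]
        + 0.3 * item["accuracy"]
        + 0.2 * item["wellbeing"]
        + 0.1 * novelty  # reward perspectives the user hasn't already seen
    )

candidates = [
    {"topic": "politics", "engagement": 0.9, "accuracy": 0.3, "wellbeing": 0.2},
    {"topic": "science",  "engagement": 0.6, "accuracy": 0.9, "wellbeing": 0.7},
]
ranked = sorted(candidates, key=lambda c: humanistic_score(c, {"politics"}), reverse=True)
print([c["topic"] for c in ranked])  # the accurate, uplifting item now wins
```

The open problem, of course, is that engagement is easy to measure from behaviour while the other signals are not, which is exactly why today's systems optimise for engagement in the first place.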
