<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Social Computing | Vedaant Jain</title><link>https://vedaantjain.netlify.app/tags/social-computing/</link><atom:link href="https://vedaantjain.netlify.app/tags/social-computing/index.xml" rel="self" type="application/rss+xml"/><description>Social Computing</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Fri, 10 May 2024 00:00:00 +0000</lastBuildDate><image><url>https://vedaantjain.netlify.app/media/icon_hu68170e94a17a2a43d6dcb45cf0e8e589_3079_512x512_fill_lanczos_center_3.png</url><title>Social Computing</title><link>https://vedaantjain.netlify.app/tags/social-computing/</link></image><item><title>LLMs Mimic Reddit</title><link>https://vedaantjain.netlify.app/project/redditsimulations/</link><pubDate>Fri, 10 May 2024 00:00:00 +0000</pubDate><guid>https://vedaantjain.netlify.app/project/redditsimulations/</guid><description>&lt;p>This project explores the potential of Large Language Models (LLMs) to accurately simulate user behavior in Reddit communities. We investigate whether LLMs can effectively mimic the communication patterns of specific users when provided with their comment history as context, focusing on the r/science subreddit.&lt;/p>
&lt;p>Authors: Vedaant Jain*, Yoshee Jain*, Ishq Gupta, Aditi Shrivastava, Koustuv Saha, Eshwar Chandrasekharan&lt;/p>
&lt;p>Key aspects of this research include:&lt;/p>
&lt;ul>
&lt;li>Developing prompting strategies for comment prediction and masked fill-in-the-blank tasks&lt;/li>
&lt;li>Evaluating LLM performance on style similarity (formality, syntax) and content similarity (semantics, emotions)&lt;/li>
&lt;li>Analyzing the accuracy of LLMs in replicating user-specific communication nuances&lt;/li>
&lt;li>Exploring potential applications in automated moderation and the promotion of prosocial behavior&lt;/li>
&lt;/ul></description></item></channel></rss>