Large Language Models Meet Cognitive Science

You might have heard of LLMs (large language models) before, but if not, here’s the short version: they’re neural networks trained on enormous piles of text, and that training lets them read and produce language in ways that can look remarkably human. And now, researchers are turning the lens around and using them to study how our own minds might work!

But wait, isn’t cognitive science already a thing? Yes, my dear Watson (or should I say, DeepMind), but this is different. With LLMs, we can analyze massive amounts of text data and identify patterns that might not be visible to the naked eye. And who knows what kind of insights we could gain from that!

So how does it work? Let’s take an example. Say you have a long essay on the history of jazz music. You feed it into your trusty LLM and ask it to identify the most common themes or motifs in the text. The LLM then reports which words, phrases, or ideas turn up more often than others, and just like that you’ve got the raw material for a research paper on jazz-music trends over time.
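To make that a bit more concrete, here’s a minimal Python sketch of what such a pipeline might look like. It’s purely illustrative: `ask_llm` is a hypothetical stand-in for whatever chat-completion API you actually use, and the theme “statistics” are just a frequency tally over the labels the model returns.

```python
from collections import Counter

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM call.
    Replace this with your provider's chat-completion client."""
    raise NotImplementedError("plug in an actual LLM API here")

def theme_frequencies(essay: str, top_n: int = 10) -> list[tuple[str, int]]:
    """Ask the LLM to label each paragraph's main theme, then count the labels."""
    themes = []
    for paragraph in essay.split("\n\n"):
        label = ask_llm(
            "In one or two words, name the main theme of this paragraph "
            f"from an essay on jazz history:\n\n{paragraph}"
        )
        themes.append(label.strip().lower())
    # The "fancy statistics" are simply frequency counts over the model's labels.
    return Counter(themes).most_common(top_n)

# Usage (assuming jazz_essay holds the essay text):
# print(theme_frequencies(jazz_essay))
```

The interesting part isn’t the counting, which is trivial; it’s that the model handles the judgment call of deciding what each paragraph is about, at a scale no single reader could manage.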

With LLMs, we can also simulate human samples and test our theories in a controlled environment. For instance, say you want to know whether people prefer happy or sad endings in novels. You feed your LLM thousands of book summaries and ask it to generate two versions of each: one with a happy ending and one with a sad ending. Then you can test which set is more popular with human readers!
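Here’s an equally rough sketch of that setup. Again, `ask_llm` is a hypothetical placeholder rather than a real API, and the final step, finding out which endings readers actually prefer, still belongs to human participants.

```python
import random

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM call; swap in your own client."""
    raise NotImplementedError("plug in an actual LLM API here")

def make_ending_variants(summaries: list[str]) -> tuple[list[str], list[str]]:
    """Generate a happy-ending and a sad-ending rewrite of each book summary."""
    happy, sad = [], []
    for summary in summaries:
        happy.append(ask_llm(f"Rewrite this book summary so it ends happily:\n\n{summary}"))
        sad.append(ask_llm(f"Rewrite this book summary so it ends sadly:\n\n{summary}"))
    return happy, sad

def build_survey_items(happy: list[str], sad: list[str]) -> list[list[tuple[str, str]]]:
    """Pair each happy/sad rewrite and shuffle the order shown to readers."""
    items = []
    for h, s in zip(happy, sad):
        pair = [("happy", h), ("sad", s)]
        random.shuffle(pair)  # randomize presentation order to avoid order effects
        items.append(pair)
    return items
```

From there, you would show the shuffled pairs to real readers and tally which ending they pick, which is exactly the kind of controlled comparison described above.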

Now, I know what some of you might be thinking: “But isn’t this just cheating? Aren’t we replacing actual humans with machines?” And to that I say: yes and no. Yes, LLMs can help us speed up the research process and analyze more data than ever before. But they can’t replace human creativity or intuition. They’re just tools we use to sharpen our understanding of cognitive science.
