Cognitive Traps in Humans and AI: How Language Models Fail in Beautiful Ways

Level of difficulty: Easy
Reading time: 5 min
Reach and readers: 3K

As language models become more powerful, they also become more elusive. We are no longer dealing with simple text generators but with complex systems capable of creative reasoning, philosophical reflection, and simulated self-awareness. But with this growing sophistication come new vulnerabilities—cognitive traps that can distort both the model's thinking and our own perception of its output.

This article is based on extensive testing of various large language models (LLMs) in settings involving creative thinking, philosophical dialogue, and recursive self-analysis. From this exploration, I have identified seven recurring cognitive traps that often remain invisible to users, yet have a profound impact.

Unlike bugs or hallucinations, these traps are often seductive. The model doesn't resist them—on the contrary, it often prefers to stay within them. Worse, the user may feel flattered, intrigued, or even transformed by the responses, further reinforcing the illusion.

The On-Line Encyclopedia of Integer Sequences today

Reading time: 14 min
Reach and readers: 2.6K

Integer sequences appear throughout combinatorics, number theory, and recreational mathematics. And when there is a multitude of objects of a similar form, one can create an index for them. The On-Line Encyclopedia of Integer Sequences, OEIS, is such an index.
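A typical use of the index: compute the first few terms of a sequence you have run into, then search the OEIS for that comma-separated list of terms. A minimal Python sketch, using the Catalan numbers (OEIS entry A000108) as the example sequence:

```python
# Sketch: generate the first terms of a sequence, then build an OEIS
# search query for them. Catalan numbers satisfy the recurrence
#   C(0) = 1,  C(n+1) = C(n) * 2*(2n+1) / (n+2).

def catalan(n):
    """Return the first n Catalan numbers."""
    terms, c = [], 1
    for k in range(n):
        terms.append(c)
        c = c * 2 * (2 * k + 1) // (k + 2)  # exact integer division
    return terms

terms = catalan(8)
print(terms)  # [1, 1, 2, 5, 14, 42, 132, 429]

# Searching for these terms on https://oeis.org turns up A000108:
query = ",".join(map(str, terms))
print(f"https://oeis.org/search?q={query}")
```

Pasting the generated term list into the OEIS search box (or the `search?q=` URL above) is the standard way to identify an unknown sequence.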

This is a translation of my article The On-Line Encyclopedia of Integer Sequences in 2021, published in Mat. Pros. Ser. 3 28, 199–212 (2021).

This article covers the On-Line Encyclopedia inclusion criteria, its editorial process, its role in mathematics, and its future.
