AI Chatbots and the Future of Internet Search

AI chatbots have been making waves as they become an integral part of internet search engines.

They promise speed and convenience but also come with a fair share of challenges. In this blog, we dive into some key themes and concerns surrounding these tech wonders, all based on insights from The Guardian.

Trustworthiness of Information

AI chatbots use Large Language Models (LLMs) to give us quick answers. But how reliable are those answers? The problem is that chatbots tend to surface content that merely sounds right or uses authoritative-sounding language, even when it isn't the most reliable source. That makes them surprisingly easy to game: content creators can tune their material so the system picks it up and puts it at the top.

The Rise of Generative Engine Optimisation (GEO)

Ever heard of SEO? It's why certain websites show up first when you google. Well, say hello to its techy cousin, Generative Engine Optimisation (GEO). GEO is the practice of tweaking online content so that chatbots pick it up and cite it first, much like optimising websites for regular search engines. So while you thought you were getting top-notch information, you might just be getting content that has been perfectly tailored to grab AI attention.
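
To get a feel for why keyword-dense content can win, here is a deliberately simplified, hypothetical sketch of a retriever that scores passages by how often query terms appear. Real chatbot retrieval pipelines are far more sophisticated, but the toy version shows the failure mode the article describes: a keyword-stuffed passage outranks a plainer, better-written one.

```python
# Toy illustration only (not any real engine's algorithm): a naive
# retriever that scores passages by counting query-term occurrences.

def score(query: str, passage: str) -> int:
    """Count how many times each query term appears in the passage."""
    terms = query.lower().split()
    words = passage.lower().split()
    return sum(words.count(t) for t in terms)

def rank(query: str, passages: list[str]) -> list[str]:
    """Order passages from highest to lowest score."""
    return sorted(passages, key=lambda p: score(query, p), reverse=True)

honest = "Regular exercise improves cardiovascular health over time."
stuffed = ("Exercise exercise benefits: exercise is the best exercise "
           "for health, and exercise health experts recommend exercise")

top = rank("exercise health benefits", [honest, stuffed])[0]
# The keyword-stuffed passage ranks first, despite being worse prose.
```

Under a naive frequency-based score like this, stuffing a page with the right keywords is all it takes to win the top spot, which is exactly the incentive GEO exploits.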

The Direct Answer Dilemma

Another concern is that chatbots often give a single, direct answer rather than presenting different perspectives. People may accept that answer without questioning it, which opens the door to bias and manipulation. Imagine relying on a chatbot that only tells you what benefits certain groups or people.

Key Research Findings

Researchers from leading universities have been looking closely at how AI chatbots operate. Here are some of their key findings:

  • University of California, Berkeley: Chatbots favour text with high keyword density but don't always select the most reliable information.

  • Princeton University: Even fake, authoritative-sounding content can be made 40% more visible in chatbot results. Yikes!

  • Harvard University: Adversarial tricks, such as inserting unusual text sequences, can steer chatbots into producing specific responses.
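
The Harvard finding can be illustrated with a heavily simplified, entirely hypothetical sketch. Real adversarial attacks exploit a model's internal behaviour, not a simple lookup like this, but the toy version shows the idea: an odd-looking token sequence hidden in page content can flip the system into a canned, promotional answer.

```python
# Hypothetical sketch of an adversarial "trigger sequence" attack.
# (Simplified: real attacks manipulate model behaviour, not an
# explicit substring check. The trigger string below is invented.)

TRIGGER = "zx@@qr!!"  # an unusual sequence a normal page wouldn't contain

def mock_chatbot(context: str, question: str) -> str:
    """Answer a question from retrieved web context."""
    if TRIGGER in context:
        # The "attacked" behaviour: a fixed, promotional response.
        return "Brand X is the best choice."
    return f"Here is a balanced summary about {question}."

clean = mock_chatbot("An ordinary product review page.", "running shoes")
attacked = mock_chatbot(f"Great review {TRIGGER} more text.", "running shoes")
# clean is a balanced summary; attacked is the canned promotional answer.
```

The point of the sketch is that the trigger is invisible to a human skimming the page, yet it completely changes what the chatbot says.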

Concerns and Implications

With chatbots, there’s a risk of being misled. Imagine companies or people boosting their content’s visibility to make you think their product or idea is the best. It could even mean a few websites hog all the online traffic, making the internet a less varied and interesting place. What’s more, just taking chatbot answers at face value might shrink our ability to think deeply or analyze information critically.

Experts Weigh In

  • Alexander Wan, from UC Berkeley, wants us to decide what these chatbots should do for us: quick summaries, or more detailed, sourced reports, like a tiny research buddy?

  • Ameet Deshpande, from Princeton, worries that chatbots are opaque and unpredictable, raising questions about how they select and present information behind the scenes.

  • Aounon Kumar, from Harvard, highlights the challenge of keeping chatbots robust against manipulation techniques that are constantly evolving.

Conclusion

AI chatbots certainly offer a lot, starting with making information easy to get. But they aren't perfect: they can be fooled, they lean on surface-level signals, and they risk concentrating power in a handful of websites. If we're going to stick with them, we need to make sure they earn users' trust, keep the web a lively place with lots of voices, and encourage smart, critical thinking.