AI is sending people to places that don’t exist

News Express | 29th Sep 2025

An imagined town in Peru, an Eiffel Tower in Beijing: travellers are increasingly using tools like ChatGPT for itinerary ideas – and being sent to destinations that don’t exist.

Miguel Angel Gongora Meza, founder and director of Evolution Treks Peru, was in a rural Peruvian town preparing for a trek through the Andes when he overheard a curious conversation. Two unaccompanied tourists were chatting amicably about their plans to hike alone in the mountains to the “Sacred Canyon of Humantay”.

“They [showed] me the screenshot, confidently written and full of vivid adjectives, [but] it was not true. There is no Sacred Canyon of Humantay!” said Gongora Meza. “The name is a combination of two places that have no relation to the description. The tourist paid nearly $160 (£118) in order to get to a rural road in the environs of Mollepata without a guide or [a destination].”

What’s more, Gongora Meza insisted that this seemingly innocent mistake could have cost these travellers their lives. “This sort of misinformation is perilous in Peru,” he explained. “The elevation, the climatic changes and accessibility [of the] paths have to be planned. When you [use] a program [like ChatGPT], which combines pictures and names to create a fantasy, then you can find yourself at an altitude of 4,000m without oxygen and [phone] signal.”

In just a few years, artificial intelligence (AI) tools like ChatGPT, Microsoft Copilot and Google Gemini have gone from a mere novelty to an integral part of trip planning for millions of people. According to one survey, 30% of international travellers are now using generative AI tools and dedicated travel AI sites such as Wonderplan and Layla to help organise their trips.

While these programs can offer valuable travel tips when they’re working properly, they can also lead people into some frustrating or even dangerous situations when they’re not. This is a lesson some travellers are learning when they arrive at their would-be destination, only to find they’ve been fed incorrect information or steered to a place that only exists in the hard-wired imagination of a robot.

Dana Yao and her husband recently experienced this first-hand. The couple used ChatGPT to plan a romantic hike to the top of Mount Misen on the Japanese island of Itsukushima earlier this year. After exploring the town of Miyajima with no issues, they set off at 15:00 to hike to the mountain’s summit in time for sunset, exactly as ChatGPT had instructed them.

“That’s when the problem showed up,” said Yao, a creator who runs a blog about travelling in Japan, “[when] we were ready to descend [the mountain via] the ropeway station. ChatGPT said the last ropeway down was at 17:30, but in reality, the ropeway had already closed. So, we were stuck at the mountain top.”

A 2024 BBC article reported that Layla briefly told users that there was an Eiffel Tower in Beijing and suggested an entirely unfeasible marathon route across northern Italy to a British traveller. “The itineraries didn’t make a lot of logical sense,” the traveller said. “We’d have spent more time on transport than anything else.”

According to a 2024 survey, 37% of those surveyed who used AI to help plan their travels reported that it could not provide enough information, while around 33% said their AI-generated recommendations included false information.

These issues stem from how AI generates its answers. According to Rayid Ghani, a distinguished professor in machine learning at Carnegie Mellon University, while programs like ChatGPT may seem to be giving you rational, useful advice, the way they get this information means you can never be completely sure whether they’re telling you the truth.

“It doesn’t know the difference between travel advice, directions or recipes,” Ghani said. “It just knows words. So, it keeps spitting out words that make whatever it’s telling you sound realistic, and that’s where a lot of the underlying issues come from.”

Large language models like ChatGPT work by analysing massive collections of text and putting together words and phrases that, statistically, feel like appropriate responses. Sometimes this provides perfectly accurate information. Other times, you get what AI experts call a “hallucination”, as these tools just make things up. But since AI programs present their hallucinations and factual responses the same way, it's often difficult for users to distinguish what’s real from what’s not.
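
Ghani’s point can be illustrated with a toy sketch. The Python snippet below is purely hypothetical: it builds a tiny bigram table from an invented mini-corpus and chains statistically likely next words together. Real systems like ChatGPT use neural networks trained on enormous datasets, not lookup tables, but the spirit is the same: the program strings together plausible words with no check on whether the result describes anything real.

```python
import random
from collections import defaultdict

# Invented mini-corpus for illustration only; a real model trains on
# vastly more text. Note that nothing here encodes what is actually true.
corpus = (
    "the sacred valley of cusco is in peru . "
    "the humantay lake trek starts near mollepata . "
    "the canyon trail is in peru ."
).split()

# Count which word follows which (a simple bigram table).
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start_word, length=10):
    """Chain statistically plausible next words; truth is never consulted."""
    words = [start_word]
    for _ in range(length):
        followers = transitions.get(words[-1])
        if not followers:
            break
        words.append(random.choice(followers))  # pick a likely follower
    return " ".join(words)

# Fluent-sounding output can stitch together fragments from unrelated
# sentences, producing descriptions of places that may not exist.
print(generate("the"))
```

Because a factual sentence and an invented one come out of exactly the same sampling step, nothing in the output marks which is which.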

In the case of the “Sacred Canyon of Humantay”, Ghani believes the AI program likely just put together a few words that seemed appropriate to the region. Similarly, analysing all that data doesn’t necessarily give a tool like ChatGPT a useful understanding of the physical world. It could easily mistake a leisurely 4,000m walk through a city for a 4,000m climb up the side of a mountain – and that’s before the issue of actual misinformation comes into play.

A recent Fast Company article recounted an incident where a couple made the trek to a scenic cable car in Malaysia that they had seen on TikTok, only to find that no such structure existed. The video they’d watched had been entirely AI generated, either to drum up engagement or for some other strange purpose.

Incidents like this are part of a larger trend of AI implementations that may subtly – or not so subtly – alter our sense of the world. A recent example came in August, when content creators realised YouTube had been using AI to alter their videos without permission by subtly “editing” things like the clothing, hair and faces of real people in the videos. Netflix landed in hot water for its own use of AI in early 2025, after efforts to “remaster” old sitcoms left surreal distortions in the faces of beloved 1980s and ‘90s television stars. As AI is increasingly used to make these kinds of small changes without our knowledge, the lines between reality and a polished AI dreamworld may be starting to blur for travellers too.

Javier Labourt, a licensed clinical psychotherapist and advocate for the way travel can help boost our overall mental health and sense of connection, worries the proliferation of these issues could counteract the very benefits travel can offer in the first place. He feels that travel offers a unique opportunity for people to interact with those they might not otherwise meet and learn about different cultures firsthand – all of which can lead to greater empathy and understanding. But when AI hallucinations feed users misinformation, it offers a false narrative about a place before travellers even leave home.

More like this:

• The seven travel trends that will shape 2025

• Bollywood stars fight for personality rights amid deepfake surge

• How has the digital nomad trend evolved over the years?

There are currently attempts to regulate how AI presents information to users, including several proposals from the EU and US to include watermarks or other distinguishing features so viewers know when something has been altered or generated by AI. But according to Ghani, it’s an uphill battle: “There is a lot of work going on around misinformation: How do you detect it? How do you help people [identify] it? [But] mitigation is a more reliable solution today than prevention.”

If these kinds of regulations do pass, they could make it easier for travellers to detect AI-generated images or videos. But new rules aren’t likely to help you when an AI chatbot makes something up in the middle of a conversation. Experts, including Google CEO Sundar Pichai, have said hallucinations may be an “inherent feature” of large language models like ChatGPT or Google’s Gemini. If you’re going to use AI, that means the only way to protect yourself is to stay vigilant.

One thing Ghani suggests is to be as specific as possible in your queries and to verify absolutely everything. However, he acknowledges the unique problem travel poses to this method, as travellers are often asking about destinations they’re unfamiliar with. But if an AI tool gives you a travel suggestion that sounds a little too perfect, double-check it. In the end, Ghani says, the time spent verifying AI information can in some cases make the process just as laborious as planning a trip the old-fashioned way.

For Labourt, the key to travelling well – with or without AI – is keeping an open mind and being adaptable when things go wrong. “Try to shift the disappointment [away from] being cheated by someone,” he suggested. “If you are there, how will you turn this [around]? You’re already on a cool trip, you know?” (BBC)

• AI often gives travellers inaccurate and untrue information. Sometimes it even makes up destinations entirely (Credit: Getty Images)



