‘AI Gave Me Your Number’
Source: Independent, Anthony Cuthbertson
Photo: ChatGPT
The phone calls began out of nowhere and continued, unsolicited, for over a month. Each caller was a different person seeking help – everything from legal advice to being locked out of a home. The one thing the strangers had in common was that they had found the phone number through Google’s AI.
This is the reported experience of one victim of a new trend known as AI doxxing, which involves popular platforms like Gemini or ChatGPT sharing people’s private information without their consent. In this instance, the victim’s personal phone number appears to have been used as a placeholder whenever users asked the AI to provide contact details for a company or service.
“Strangers are calling me constantly looking for a lawyer, a product designer, a locksmith – you name it,” they wrote in a post to Reddit’s r/Google forum. “Every single one of them tells me: ‘I got your number from Google’s AI’. This is a massive privacy violation and data leak. My phone doesn’t stop ringing with random people expecting a service, and my daily life is being completely disrupted.”
Other reported instances of AI doxxing include Elon Musk’s Grok chatbot exposing home addresses of non-public figures, Meta’s WhatsApp AI assistant mistakenly sharing people’s private numbers, and ChatGPT hallucinating incriminating information about an individual.
A report last month from Virgin Media O2 found that millions of Brits have been served fake customer service numbers via AI tools, with criminals now exploiting this issue by injecting their own phone numbers into large language model (LLM) powered systems in order to influence the results. By posing as trusted brands, they are able to steal data, perpetrate fraud, and lure victims into scams.
Scammers are able to do this by “seeding poisoned content” across the web in places like Yelp reviews or YouTube comments, according to separate research from AI security firm Aurascape. By pairing fake phone numbers with keywords like ‘official British Airways reservations number’, they ensure the numbers are picked up by the web crawlers used to train the LLMs.
Security experts say people can avoid falling victim to such scams by only using numbers listed on official company websites. But for those whose phone numbers accidentally end up in chatbots’ answers, there seems to be little that can be done to prevent it from happening.
“Standard support forms are a complete dead end,” the person whose number is being served up through Google’s Gemini and AI Overviews said. “I submitted an official legal removal/privacy request to Google, asking them to urgently blacklist my number from their LLM outputs. I haven’t received a single response, and the harassment continues daily.”
The difficulty of fixing an LLM once it has already been trained was evident this week when OpenAI was forced to acknowledge ChatGPT’s goblin obsession. Whether it’s hallucinations turned into harassment, or poisoned data leading you to scammers, there is currently no easy answer to this problem. While search engines can ‘forget’, AI systems cannot simply unlearn.