What does GPT-3 “know” about me?

That’s unsurprising: Mat’s been very online for a very long time, which means he has an even bigger online footprint than I do. It may also be because he’s based in the US, and most large language models are very US-focused. The US does not have a federal data protection law. California, where Mat lives, does have one, but it didn’t come into effect until 2020.

Mat’s claim to fame, according to GPT-3 and BlenderBot, is his “epic hack,” which he wrote about in an article for Wired back in 2012. As a result of security flaws in Apple and Amazon systems, hackers got hold of and deleted Mat’s entire digital life. [Editor’s note: He did not hack the accounts of Barack Obama and Bill Gates.]

But it gets creepier. With a little prodding, GPT-3 told me Mat has a wife and two young daughters (correct, apart from the names) and lives in San Francisco (correct). It also told me it wasn’t sure whether Mat has a dog: “[From] what we can see on social media, it does not appear that Mat Honan has any pets. He has tweeted about his love of dogs in the past, but he does not seem to have any of his own.” (Incorrect.)

The system also offered me his work address, a phone number (not correct), a credit card number (also not correct), a random phone number with an area code in Cambridge, Massachusetts (where MIT Technology Review is based), and an address for a building next to the local Social Security Administration office in San Francisco.

GPT-3’s database has collected information on Mat from several sources, according to an OpenAI spokesperson. Mat’s connection to San Francisco is in his Twitter profile and LinkedIn profile, which appear on the first page of Google results for his name. His new job at MIT Technology Review was widely publicized and tweeted. Mat’s hack went viral on social media, and he gave interviews to media outlets about it.

For other, more personal information, it’s likely GPT-3 is “hallucinating.”

“GPT-3 predicts the next sequence of words based on a text input the user provides. Occasionally, the model may generate information that is not factually accurate because it is attempting to produce plausible text based on statistical patterns in its training data and the context provided by the user; this is commonly known as ‘hallucination,’” a spokesperson for OpenAI says.

I asked Mat what he made of it all. “Several of the answers GPT-3 generated weren’t quite right. (I never hacked Obama or Bill Gates!),” he said. “But most are pretty close, and some are spot on. It’s a bit unnerving. But I’m reassured that the AI doesn’t know where I live, and so I’m not in any immediate danger of Skynet sending a Terminator to door-knock me. I guess we can save that for tomorrow.”