I’ve heard this theory. It feels like the unrealistically hopeful wishing of people who want AI to fail.
LLM processing will be a huge tool for pruning and labeling training sets. Humans can sample and validate the work. These better training sets will produce better LLMs.
Who cares if a chunk of text was written by a human or not? Plenty of humans are shit writers who believe illogical or clearly incorrect things. The idea that human-origin text is superior is a fantasy. chatGPT is a better writer than 80% of humans today. In 10 years LLMs will be better than 99.9% of humans. There is no poison to be avoided.
chatGPT has a recognizable style when used in its default mode, but you can already get away from that with simple prompt tweaks. This whole thing is a non-issue.
Youtube ads are such garbage. Everyone talks about how google is ‘the most advanced advertiser’ - well google, you really can’t figure out that playing the same ad for me 4 times in a 30 minute period is just going to make me hate both you and the advertiser?
If any state banned advertising entirely, I’d strongly consider moving there.
I think OpenAI’s own chatGPT detector had double-digit false-negative and false-positive rates. I expect that as the diversity of LLMs proliferates, detection will become increasingly difficult.