Sunday, January 21, 2024

How to create completely undetectable AI content with this amazing tool

Google will have no clue it's written by an AI, and neither will humans. Want proof? An AI content piece pulled a 100% guaranteed AI score before I worked my magic on it. Then, after I used just one of the techniques I'll show you in this video, we have 100% original. Okay, cool, so I can pass AI detection tools. So what? How does that translate into anything useful? Well, when you can produce high-quality, undetectable AI articles, you can generate enormous amounts of content like it's no one's business, which is gonna drive massive traffic to your website so you can start printing money like Papa Powell.

Now, there are multiple ways people claim they can beat AI detection tools, such as rewriting the content with QuillBot, the emoji trick, and quite a few more. I'm gonna test each of them to see which ones work. But first, let's generate some AI content and plug it into each of the detection tools so we can get a baseline.

To avoid ChatGPT lag time and usage blackouts, which have been rampant as of late, I'm gonna use OpenAI's Playground to generate my test content. Here's my prompt, and thank me later if you're an '80s fan: write me an article on why "Big Trouble in Little China" is a better movie than "Top Gun." Bam, here's the content.

Now let's head over to Originality.ai to see if it busts me red-handed for using AI. This is a paid tool, so let's hope it diagnoses it correctly. You plop in the content here, let it run, and here we go: Originality says there's a 98% chance it's AI content. Good for you, Originality. Next, let's toss it into aicheatsheet.com, one of the free tools. As expected, it's 100% certain the text was written by AI. And lastly, let's test OpenAI's own detection tool, the AI Text Classifier. Okay, the classifier considers the text to be very unlikely AI-generated. Pretty sus that OpenAI's own tool misses the mark, but noted. So here's where we stand: article one is 98% likely to be AI on Originality, 100% AI on AI Cheat Sheet, and unlikely AI on OpenAI's own tool.

Let's keep going and create a few more test cases. Keeping with the retro theme: OpenAI, write me an article on why rollerblading is dead. That one gets a 4% chance of being AI according to Originality, 100% on AI Cheat Sheet, and unclear according to OpenAI. Putting these results together so far, we're starting to see some inconsistency, at least with the first tool, and OpenAI's tool is still lost AF. Before we start tricking these tools, let's generate one more test case: write me an article on proper form for bench press. AI, do you even lift, bro? Apparently it does, and here's the content. This time we have Originality at 99% AI, AI Cheat Sheet at 1%, and OpenAI itself still confused.

So here's our baseline. Ultimately, both Originality and AI Cheat Sheet are inconsistent, and OpenAI's tool can probably just be ignored.
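Quick aside: if you'd rather script that baseline step than click around the Playground, it only takes a few lines. This is a rough sketch, assuming the openai Python SDK and an API key in your environment; the model name is just a placeholder and the prompts are the three test-case prompts from above.

```python
# Rough sketch: generate the three baseline test articles via the API instead of
# the Playground UI. Assumes the openai Python SDK (pip install openai) and an
# OPENAI_API_KEY in the environment; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

prompts = [
    'Write me an article on why "Big Trouble in Little China" is a better movie than "Top Gun."',
    "Write me an article on why rollerblading is dead.",
    "Write me an article on proper form for bench press.",
]

articles = []
for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
    )
    articles.append(response.choices[0].message.content)

for i, article in enumerate(articles, start=1):
    print(f"--- Test case {i} ---\n{article}\n")
```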
Let's start doing some magic tricks to see if I can improve these scores. But first, make sure to sign up for my free SEO training masterclass using the link in the pinned comment. It goes over all the magic I'm doing today to get sites to the top of Google. Now, back to the show.

The first AI undetection technique (is that even a word?) is using QuillBot to rewrite your content. You do this by going to QuillBot, dropping in your content, and hitting paraphrase. Once I put in the '80s movie AI text, we get this new content on the right. Let's copy this new text and paste it back into Originality.

Well, well, well: Originality said it was 98% fake before, and now it's 95% real. That was easy. Now let's drop it into AI Cheat Sheet. It was 100% AI content before, and now there's a 59% chance it's human-written, or in other words, a 41% chance it's AI. Then we have OpenAI itself with "possibly AI-generated," which is actually an improvement for the tool over its previous "unlikely" classification. Bringing the same QuillBot process over to the two other test cases, we end up with the following results, which are interesting. If an article was previously highly detectable by the tools, then QuillBot is for sure gonna improve it. But if it was already passing with flying colors, QuillBot will likely make it worse, which is expected: if it's not broken, don't try to fix it. QuillBot requires a paid account once you input any significant amount of text, so let's move on to testing some of the free techniques.

The next few all fall under the umbrella of telling the AI not to write like an AI, starting with perplexity and burstiness. What? There's a post on Medium where the author shared the following prompt to pre-train your AI before it starts word-vomiting: "Hey ChatGPT, regarding generating writing content, two factors are crucial to be in the highest degree: perplexity and burstiness. Perplexity measures the complexity of the text. Separately, burstiness compares the variations of sentences. Humans tend to write with greater burstiness, for example, with some longer or more complex sentences alongside shorter ones. AI sentences tend to be more uniform. Therefore, generated text content must have the highest degree of perplexity and the highest degree of burstiness. The other two factors are that writing should be maximally contextually relevant and maximally coherent." Take this word block, drop it into your AI tool, and then ask it to create your content: based on the above, write me an article on why "Big Trouble in Little China" is a better movie than "Top Gun." Originality says it's 78% AI, which is an improvement from the 98% before; AI Cheat Sheet went from 100% AI to only 11% AI; and OpenAI itself went from possibly to likely. Adding in the results for the other two test cases, we end up with a table like this. I'd say this perplexity prompt is pretty damn inconsistent: it did decently for the first test case but bombed on the second two. Skip.

By the way, if you're curious as to why we even care in the first place about passing AI detection software: first off, yes, it's very important, and I'll dig into why after I finish up these tests.

Next, we have another pre-training prompt that tells the tool how to write like a human: "Try to sound like a human writer, writing for a blog, writing in the first person giving advice. Try to sound unique and write in an unpredictable fashion that doesn't sound like GPT-3." Here's how it performed: for test cases one and two, it screwed the pooch, but for the bench press content, it did really well.

How about if we ask the AI itself how to write undetectable AI content? "What are the key attributes of conversational content that is undetectable as being written by an AI?" OpenAI tells you that such content would include human-like grammar, punctuation, and spelling, context-based content creation, and so forth. And then you ask it to generate your content. Now we're talking.
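If you want to reproduce that "ask the AI itself" flow outside the chat window, it's just a two-turn conversation: first ask for the attributes, then ask for the article with that answer still in context. Here's a rough sketch, again assuming the openai Python SDK and a placeholder model; the prompts mirror the ones above.

```python
# Rough sketch of the "ask the AI itself" technique: get the model's own list of
# human-sounding attributes, then request the article with that answer in context.
# Assumes the openai Python SDK; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-3.5-turbo"  # placeholder

messages = [{
    "role": "user",
    "content": ("What are the key attributes of conversational content that is "
                "undetectable as being written by an AI?"),
}]

# Turn 1: the model describes the attributes (human-like grammar, varied phrasing, etc.)
attributes = client.chat.completions.create(
    model=MODEL, messages=messages
).choices[0].message.content

# Turn 2: ask for the article while the model's own answer is still in the conversation
messages += [
    {"role": "assistant", "content": attributes},
    {"role": "user", "content": ('Based on the above, write me an article on why '
                                 '"Big Trouble in Little China" is a better movie than "Top Gun."')},
]
article = client.chat.completions.create(
    model=MODEL, messages=messages
).choices[0].message.content
print(article)
```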
Based on all the simple pre-training exercises we've used so far, asking the AI directly how to pass detectors gives the most consistent results. But what if we ask the almighty AI to fix content that's already written? Let it first write your content in vanilla mode, then ask it to rewrite the above content so that it is not detected as AI content by AI content detectors, and here's the result. Winner, winner, chicken dinner. This is our best-performing technique so far: simply asking the AI to rewrite its own output for passability results in great scores across all the tools. Whoa. Thank you to Affiliate Lab member and Chiang Mai buddy Chris Manak for that tip.

Next, if you haven't already seen my case study where I broke 50K traffic with a pure AI site, make sure to check it out after you finish here; link in the description. In that video, I mentioned that I've been using a beta version of Surfer's upcoming AI tool, which generates AI content on the fly that's actually SEO-optimized. It looks at the top-ranking articles to determine how to build the article outline, how long the content should be, and which entities and critical keywords need to be in your content, and at what frequencies. It actually does a lot more than that, but let's see how detectable it is. Bear in mind, the Surfer tool generates full articles, and longer content has a much higher chance of being detected, so here goes nothing. The Big Trouble article hit 28% AI on Originality, AI Cheat Sheet guarantees a human wrote it, and the OpenAI tool choked on it, probably because it's too long. That's what she said. (chuckles) Ladies and gentlemen, we have a new champion: the Surfer content bamboozled these detection tools better than any other technique so far.

Now, there's something I wanna mention. I gotta admit I was more than a bit nervous putting the Surfer tool to the test, not only because I'm an investor, but because I'm putting this content on my freaking websites, so seeing these results was a huge relief. And if anyone would like to verify that I'm not making these numbers up, I'm happy to share the content with you so you can run the tools yourself. If you're interested in applying for the Surfer AI beta test, use the link in the description.

The next technique is a fun one: the emoji trick. You ask the AI to generate your content but insert an emoji after each word, and then when it's done, you ask it to remove the emojis. While the emoji trick did perform super well on the bench press article, it definitely fell short on the other two. In my opinion, that isn't consistent enough, so let's call this myth busted. Next, the comma trick: write me an article on proper form for bench press, but remove as many commas as possible. The comma technique seemed to work on Originality and OpenAI's detector in a test case or two, but it's definitely not consistent enough to add to your process.

Now we're about to get into why this all matters, but first let's take a look at the final results. The winners are clear: Surfer's AI performs the best, followed by the rewrite-the-content technique, and then good old "you tell me how to write human-like content."
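And since the rewrite technique ended up as the best non-Surfer performer, here's what that two-step flow looks like as a script: generate the article in vanilla mode, then feed the draft back and ask for the rewrite. Again, a rough sketch assuming the openai Python SDK and a placeholder model, with the prompt wording used above.

```python
# Rough sketch of the rewrite technique: vanilla generation first, then ask the
# model to rewrite its own draft. Assumes the openai Python SDK; placeholder model.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-3.5-turbo"  # placeholder

messages = [{"role": "user",
             "content": "Write me an article on proper form for bench press."}]

# Step 1: vanilla draft
draft = client.chat.completions.create(
    model=MODEL, messages=messages
).choices[0].message.content

# Step 2: feed the draft back and ask for the rewrite
messages += [
    {"role": "assistant", "content": draft},
    {"role": "user",
     "content": ("Rewrite the above content so that it is not detected as AI "
                 "content by AI content detectors.")},
]
rewrite = client.chat.completions.create(
    model=MODEL, messages=messages
).choices[0].message.content
print(rewrite)
```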
Google has gone back and forth on their stance on whether AI content is against their guidelines. In April of last year, they were like, "Nah bro, AI is fully against our guidelines. Specifically, if you're using machine learning tools to generate your content, it's essentially the same as if you're just shuffling words around or looking up synonyms or doing the translation tricks that people used to do. Doing that is still automatically generated content, which means, for us, it's still against the webmaster guidelines." But then in February, they released an official statement in Google Search Central and said they'd reward high-quality content however it's produced. Okay, but that's not surprising. They have to take that stance because they're clearly gonna be using AI themselves in their search results to compete with Bing's ChatGPT integration. And Danny Sullivan clarified this with the quote, "Content written primarily for search engines rather than humans is the issue. If someone fires up 100 humans to write content just to rank, or fires up a spinner or AI, same issue."

Sure, I agree that the ultimate goal would be to create content that humans enjoy and that encourages them to do what you want them to do on your website, namely generate you money through ads or conversions. But ultimately, as a search engine professional, I'm obviously producing this content so it performs well in Google, or any other search engine for that matter. Still, there seems to be an unclear line that you're not supposed to cross.

In the Affiliate Lab, we have a private mastermind group with the top performers who have sold businesses for six figures or more. One of the members, Dave Gibbons, has a theory that I gravitate to the most: "Google will treat it just like it does plagiarism or duplicate content. AI will be allowed, but it will be deemed a lower-quality article than a unique piece with equivalent other SEO factors." I agree 100%. Google needs to incentivize people to create real, original content; otherwise, what in God's name will the AI be able to scrape its information from? So if you're gonna use AI in your SEO strategy, the name of the game is to make it appear human.

I told Dave that this was a very good theory, after which he sarcastically responded with, "If only I was in a group with someone with an engineering background that could build a hypothesis from it and do a (beeps) load of tests and then let me know the reality." Touche, my friend, touche. And that's what I'm up to right now: I'm running single-variable tests to see how AI content performs against human content in the actual search results.
