Need 300 words? 500? 1000? Even 5000? This one-shot prompt will save your sanity if you’re trying to meet a word limit with ChatGPT
Struggling to get ChatGPT to stick to specific word counts? I got you. Here’s a prompt to make ChatGPT hit those exact word targets — no more, no less 🎯 In this article, I’ll walk you through my innovative method that encourages ChatGPT to meet word limits with pinpoint accuracy.
It also greatly increases the size of ChatGPT’s replies!
I don’t say this lightly, but I did something pretty groundbreaking in prompt engineering this week. If you read my article, you’ll learn to:
- Programmatically force ChatGPT to hit word limits, from 100 to 5000
- Use segmentation and memory settings to prevent word-count chaos
- Implement Python code to keep ChatGPT on track, no matter what
- Break free from the endless cycle of undershooting and overshooting

Why ChatGPT Sucks at Meeting Word Length Goals
“Jim,” I’m asked on a regular basis, “why doesn’t AI stick to word limits?” As far as common complaints about AI limitations go, it’s right up there with: “Why does AI lie?”, and “Why does ChatGPT keep saying the word ‘delve’?”
Why “delve” is the most obvious sign of AI writing
AI text generators favor the word “delve”. Now we know why.
There are a couple of reasons why AI can’t keep track of word counts, but first we have to manage expectations of how we think about AI. It’s not an “Intelligence”, artificial or otherwise. It’s a language prediction engine. It’s not a calculator (although it can give the most probable sounding answer, which is often right — i.e. like how we hear 1+1=2 enough times that it’s a rote phrase we can recite verbally without doing any mental calculation).
ChatGPT works with words and strings, not math. It can run an equation through Python code (and its neural-net algorithms are, of course, math-based in order to predict the next word), but the LLM itself doesn’t know math.
Numbers, to ChatGPT, are just another string of characters, no different from letters or punctuation. You wouldn’t use an electronic dictionary to solve a Sudoku puzzle: sure, it knows all the numbers, but it has no idea how to put them together. LLMs have no concept of number or quantity.

Why Your Word Limit Doesn’t Compute With AI
Additionally, ChatGPT doesn’t process words like you or I do (or even like a Word document, which keeps a running count as we type). Instead, AI breaks every word down into smaller units: tokens. Tokens can be parts of words, or whole words. So if you prompt “Write 200 words,” the AI can only process that request through tokens, which don’t always map to full words.
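You can see the mismatch for yourself with OpenAI’s tiktoken library. A minimal sketch (I’m assuming the cl100k_base encoding here for illustration; the exact split varies by model):

```python
import tiktoken

# cl100k_base is the encoding used by GPT-4-era models
enc = tiktoken.get_encoding("cl100k_base")

prompt = "Write exactly 200 words about the Titanic."
tokens = enc.encode(prompt)

print(len(prompt.split()))  # 7 words
print(len(tokens))          # a different number of tokens; tokens rarely map one-to-one onto words
```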
If you ask ChatGPT to write a specific number of words, it doesn’t think of the task like a human would. It doesn’t count in a conscious sense. There’s no internal mechanism keeping track of how many words still need to be generated. It’s not sitting there going, ‘Okay, that’s 152, I need to hit 200.’
Plus, ChatGPT doesn’t verify its own work. It’s not going back to make sure it hit your magic number, because the system was designed for efficiency. It wants to wrap things up quickly, like a coworker who leaves at 4:59 p.m.
Who Needs Accuracy When You Can Be Fast?
In fact, AI platforms are so expensive to run that many services, like Apple Intelligence, are designed to put brevity above accuracy in delivering text. Accordingly, you may have noticed that AI tends to undershoot your target.
Word count requests can give a general sense of how succinct or in-depth a response needs to be (because there’s a difference between how a 100-word essay and a 1,000-word one are written; i.e. the type of language involved and the expected level of detail), but an AI is more concerned with delivering something that makes sense than sticking to your nominal word budget.
Also, an AI doesn’t plan ahead. It predicts what words come next based on probability and generates them in rapid succession. When a human writes to a word limit, there’s a sense of structure — we’re cogitating about how to pace ourselves, what key points need to be hit, and how to wrap things up neatly by the end. We can see the goal and adjust our pacing accordingly.
ChatGPT Doesn’t Care About Your Word Limit (But I Do)
With all these factors stacked against it, is it any wonder ChatGPT can’t write to word counts? It’s not a calculator. It’s unable to keep tabs on how many words it’s churned out. It doesn’t think of ‘words’ as neat little units, and once it gets going, it plows ahead, focused on efficiency over precision.
Ironically, it’s trying to sound natural, not hit an arbitrary target it doesn’t understand. After all, it has no idea why that exact number matters to you.
However, AI users might need exact word counts for a variety of reasons, whether it’s to meet a college assignment (naughty little cheats, but that’s another story), or hitting word limits on corporate reports, meeting strict editorial guidelines, sticking to an article brief, complying with character limits on social media, or adhering to alt-text constraints for accessibility.
The Satisfaction of Outsmarting AI With Prompt Wizardry
In certain professional settings, people get obsessed with exact word counts — whether it’s to fit in a layout or to satisfy the rituals of SEO.
Or my personal favourite: the intellectual puzzle. As a prompt engineer, I love getting AI to perform tasks it wasn’t designed for, like hitting a word count dead-on. That’s why I come up with my twisty little prompt hacks!
There’s something strangely satisfying about pushing AI to its limits. It’s like getting a vending machine to drop two snacks instead of one. It’s magic. Sometimes I even like to use prompt hacking to be a bit counter-culture:
Using prompt injection to break big brands’ AI advertising gimmicks
Coca Cola used AI to deliver holiday magic. It was easy to hack their brand assets.
Getting ChatGPT to Count Words in the First Place
Anyway, while trying to solve the “How many Rs in Strawberry?” question (quick recap for new readers: it’s a notorious request that throws ChatGPT for a loop; it can’t count three Rs), I realized I had stumbled upon a solution for word limits at the same time. You see, part of the Strawberry problem lies in how the model tokenizes words, and the other part is ‘momentum’.
I initially solved the Strawberry puzzle by eliciting ‘System 2 thinking’ with the word ‘ruminate’, which makes ChatGPT slow down and think. When it obeyed this request, it correctly counted three Rs. But you could see (in the response time) that the model often considered the task too straightforward and ignored the request for deeper thought. So I came up with a more rigorous solution: implanting a Memory to use the code interpreter to count letters.
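In practice, the memory just pushes ChatGPT to hand the counting off to the code interpreter, where it becomes a trivial string operation instead of a token-level guess. Something along these lines (my illustration, not the interpreter’s literal output):

```python
word = "strawberry"

# Counting characters in code is exact, even though the LLM itself can't "see" individual letters
r_count = word.lower().count("r")
print(r_count)  # 3
```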
How ChatGPT’s Memory Settings Can Improve Counting Accuracy
Memory lets you set up custom behaviors and preferences. Want ChatGPT to stop using certain words (I’m looking at you, ‘delve’), to always write in your tone, or to know details about you? Memory gives ChatGPT a cheat sheet, so you don’t have to prompt these facts again every time you chat.
If you have automatic Memory on, it remembers how you like things, and certain contextual facts about you. It’s like handing ChatGPT a sticky note.
Memory doesn’t “train” ChatGPT, but it does allow you to create custom instructions that persist across your conversations. These rules become part of the initial prompt whenever you start a chat — the invisible spiel that primes how AI will behave in any interaction. That’s right: every time you open a new conversation, ChatGPT reads these before anything starts. It may surprise you that we don’t get the first word when chatting with AI!
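If you’ve ever used the API, the mechanics will feel familiar: persistent memories behave much like a system message silently prepended to every new conversation. A rough analogy using the openai Python client (the memory text here is illustrative; it’s not literally what OpenAI injects):

```python
from openai import OpenAI

client = OpenAI()

# Memory acts roughly like an invisible system message that arrives before your first prompt
memory_notes = "The user prefers precise, programmatic word counting to ensure accuracy."

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": memory_notes},  # the "invisible spiel"
        {"role": "user", "content": "Write a 300-word essay on the Titanic."},
    ],
)
print(response.choices[0].message.content)
```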
Thanks for the memories: How praise activates memory settings in ChatGPT
ChatGPT can remember your preferences… but it’s more likely to spontaneously honor them when you show appreciation
Memory is available to ChatGPT Plus, Team, and Enterprise users, and some Free users (it’s rolling out to everyone soon). Simply hop into Settings and turn it on under custom instructions. ChatGPT will then “memorize” certain facts and infer preferences from your prompts; any time this happens you’ll see a little “Memory Added” notification. You can use Temporary Chat to skip memory settings whenever you want to go incognito, or begin a chat with “Add nothing to memory for this convo” if you want to use stored memory without adding to it. And you can always implant a memory that doesn’t activate automatically by prompting “Add this to memory”.
Solving the “2 Rs in Strawberry” Problem
The memory I added for the Strawberry problem was the following:

It works perfectly; while I loved my “ruminate” prompt and still feel it gives superior (and longer) responses for deeper reasoning tasks, the memory hack is more reliable for tasks that need strict precision:

Dodging Tokenization Like a Pro, Thanks to Code Blocks
But guess what? I found this also bypasses the tokenization problem and allows ChatGPT to give accurate word counts of passages by putting them in code blocks. You’ll remember: half the struggle of getting ChatGPT to adhere to word counts is that it can’t actually count words, and it makes up final numbers if pressed. With my memory setting, it treats passages and its own text output as code or raw data, so it’s immune to the tribulations of tokenization. It now works as well as an online counter:


You’re going to need my Memory for the next steps, so here it is. While I’m always happy to share my knowledge, your contributions (and the Medium partner program) keep this project going, so if this prompt helps you and you’re able to afford it, please consider saying thanks with a cup of coffee!
Add to memory: [your name] prefers that I always use precise methods, such as programmatic word counting, to ensure accuracy in responses.
This dramatically improved the ability of ChatGPT to meet word counts!
The First Test: ChatGPT vs. a 300-Word Titanic Essay
Let me show you exactly how this works with a real-time example. I asked ChatGPT to generate a 300-word essay on the Titanic. This is a task where ChatGPT will normally miss the boat! Maybe it reaches 200 words and calls it a day, or it bloats it out to 550. But with my prompt, I just gave it the goal, let Memory do its stuff, and watched ChatGPT get it right in a single reply:
https://www.youtube.com/watch?v=EPdHbG0e4oI
Amazing, right? Initially, ChatGPT undershot by a bit, which is common. But then it did something incredible: it immediately counted the words, compared it to the target, worked out how many words to add or remove, and automatically kept revising the text until it hit the target on its own.
I call this process of editing and rewriting “reflexive word adjustment”.
Here’s the code ChatGPT used to recalibrate the word count in Python:
```python
# Word count of the initial essay
initial_word_count = len(initial_essay.split())

# Target word count
target_word_count = 300

# Calculate how many words to add/remove
words_to_adjust = target_word_count - initial_word_count

# Output current word count and words to adjust
initial_word_count, words_to_adjust
```
What’s even more remarkable is how it ping-ponged back and forth — going from 255 to 315 to 304 to 302 — until it landed at exactly 300 words. And all of this without me adding a second prompt or having to manually adjust anything. All in one shot. It stayed on task until it nailed the word count.
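Under the hood, this “reflexive word adjustment” amounts to a simple check-and-revise loop. Here’s a minimal sketch of the logic (revise_essay is a toy stand-in for ChatGPT rewriting the draft; in the real workflow the model does the revising and only the counting happens in Python):

```python
def count_words(text: str) -> int:
    return len(text.split())

def revise_essay(essay: str, delta: int) -> str:
    # Toy stand-in for ChatGPT's rewrite step: pad or trim whole words.
    # In reality the model rewrites sentences rather than padding filler.
    words = essay.split()
    if delta > 0:
        words += ["lorem"] * delta
    elif delta < 0:
        words = words[:delta]
    return " ".join(words)

def hit_word_target(essay: str, target: int) -> str:
    """Count, compare to the target, adjust, and repeat until it lands exactly."""
    while count_words(essay) != target:
        delta = target - count_words(essay)  # positive = add words, negative = remove words
        essay = revise_essay(essay, delta)
    return essay

draft = "RMS Titanic sank on 15 April 1912 after striking an iceberg."
print(count_words(hit_word_target(draft, 300)))  # 300
```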
Here’s the prompt I used. I’m proud of it, so if you share it online, please credit the prompt engineering to me (and send the readers to Medium!):
Programmatically craft a precisely 500-word essay on [INSERT SUBJECT]. Ensure it’s exactly 500 words before presenting it. If it is not 500 exactly, make minor adjustments to the length by adding/removing the amount needed to hit the target. Before making these adjustments, estimate the exact number of words to add/remove. Use manual segmentation. Make small incremental changes as needed, but keep a tally of each word as you add or remove it, in order to stay on budget.
[This isn’t the final prompt. I’ll demonstrate how I improved it for longer output. Be sure to read to the end of the article for the ultimate version!]

The results can be successfully replicated, though it won’t work every time; AI is always variable. Sometimes it will seesaw, and occasionally it gets trapped in a word-count adjustment loop (more on this later), but it is the most reliable method so far for getting AI to generate text to a precise word limit, or at least within the ballpark. You might need to run it more than once, and I found the legacy GPT-4 model was sometimes better than GPT-4o (which doesn’t surprise me, given 4o is a more obstinate model that is less conducive to prompt whispering). Here are a few more samples from my trial runs:






And a couple on photosynthesis, because I got bored just doing the Titanic.




Test Two: ChatGPT Earns Its Stripes With 500 Words
Now the big question: can it handle even larger word counts? Absolutely, but the potential for error increases as you ask it to do more. You might find it veers off a little at high word counts. Here’s 500 words on tigers:

Now, an online word counter says there are 501 words, but don’t sweat it. The margin for error is small, and it’s close enough that no editor is going to throw a tantrum over it. Plus, it’s a massive improvement over the wild guesses ChatGPT used to make before. That tiny extra word is just a bonus.

That was a near perfect example. It generated it right on the first try, with no edits! Here’s another that engaged the reflexive word adjustment more:
https://www.youtube.com/watch?v=VP2P0hZKMfM
Here are some more examples of generating 500 words from my prompt:




As expected at higher word counts, it was a more time-consuming task for ChatGPT. It adjusted the text five times before finally landing on target. However, it was still a single request; I didn’t have to lift another finger!
Much of the increased time was accounted for by the length of the text (interestingly, both the 300 and 500 examples took 4 revisions to finally reach their target). Larger outputs will obviously use up more tokens.
Common Issues and How to Fix Them
Occasionally on longer output you might have to click the “Continue Generating” button, or at worst nudge it with a prompt to carry on. However, because it’s in code blocks, we can get away with generating much longer responses than ChatGPT typically allows in a conversation.
[Another troubleshooting hint: if it appears to crash mid generation, just refresh the chat window. Often the results will be there waiting for you!]
Of course, there are always bugs with even the best prompts, and because ChatGPT is generative, you can never be 100% sure of getting the same results every time. That’s the point of AI, otherwise we’d get identical output! One of the bugs I had to troubleshoot was the recount loop.
When this happens, you’ll see it undershoot, then overshoot (sometimes by just one word), then overcompensate. I’ve seen it go through multiple revisions before settling on the target. It can get stuck in a bit of a seesaw effect. Sometimes an earlier attempt may be closer, and actually acceptable.
It can get obsessed with getting the last few words right. Blame the perfectionist streak of my prompt! This back-and-forth is more pronounced with longer text, where it has to manage more content and counting becomes trickier.
Adding a Margin of Error to Get AI to Chill Out
The solution? I added a +/-1% margin to the word count, which allowed ChatGPT to consider anything within a small range (so, for a 1,000-word target, it could land anywhere between 990 and 1,010 words). This buffer did the trick, letting the output settle on a range that’s close enough for most practical purposes. It gets the job done without bouncing endlessly.
The 1% grace also helps on the rare occasions when it hits another snag: using the word count from a previous version without properly adding the revisions; i.e. sometimes it won’t accurately update the full count as it edits. This happens rarely, and usually only if it starts getting too pedantic about hitting an exact number. Allowing a +/-1% margin for the text to breathe solves this problem, and what’s great is that as a percentage, it’s scalable, which makes it particularly effective for the longer texts that need it most. It doesn’t overcompensate as dramatically, and it lands on the target sooner.
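In code terms, the tolerance simply turns an exact-match test into a range test. A minimal sketch of the check (the function name is mine, not anything ChatGPT actually generated):

```python
def within_tolerance(word_count: int, target: int, tolerance: float = 0.01) -> bool:
    """Accept anything within +/- tolerance of the target (with at least one word of slack)."""
    margin = max(1, round(target * tolerance))
    return abs(word_count - target) <= margin

print(within_tolerance(1008, 1000))  # True: anything from 990 to 1,010 passes for a 1,000-word target
print(within_tolerance(960, 1000))   # False: 40 words short is outside the 1% margin
```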
Trial Three: A Herculean Task of 1,000 Words
With the +/-1% tweak to the prompt in action, I was able to get 1000 words on Greek mythology. The tolerance was a huge help in stopping ChatGPT from seesawing back and forth, making this one of its most heroic labors:
https://www.youtube.com/watch?v=B5IhERgcR1E
Let’s get some more examples. Now, there are some discrepancies between the numbers ChatGPT gives us, and the offsite word counters, but this can be explained by the sheer size of the responses ChatGPT is dealing with now (at the limits of its context window), and they are still good enough:








The 5000-Word Essay Challenge
But what about really long essays? Well, our biggest obstacle is the size of the context window. The context window is the amount of text (in tokens) that ChatGPT can process and remember during a dialogue. It also affects how long responses can be, and it can cause timeouts. The problem with a 5,000-word response isn’t the 5K itself, but all the versions of the revised text it has to contain within one code block. GPT-4o’s context window is 128,000 tokens, but its max output is 4,096 tokens, which is under 5,000 words.
(4,096 tokens × 0.75 ≈ 3,072 words)
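A quick back-of-the-envelope check (using the common, and rough, 0.75 words-per-token heuristic) shows why 5,000 words can’t fit in a single reply:

```python
max_output_tokens = 4096
words_per_token = 0.75  # rough English-language average

max_words_per_reply = max_output_tokens * words_per_token   # 3072.0 words
tokens_for_5000_words = 5000 / words_per_token              # ~6667 tokens, well over the 4,096 cap

print(max_words_per_reply, round(tokens_for_5000_words))
```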
We actually skirt around this token limit by tucking much of it away in the code interpreter. However, as the different revisions accumulate, and the seesawing starts, the output still gets unwieldy, and ultimately you’ll get:
```
expanded_essay_continued_full = """
                                ^
SyntaxError: incomplete input
```
At which point, everything will grind to a halt. We need the whole essay in one box in order for Python to count it accurately, so this error message is kind of a dead end; it’s not worth continuing the generation after receiving it.
At first, I thought 1,000 words might be our upper limit for hitting precise word counts, which — let’s be honest — is still impressive compared to the haphazard responses we’re used to getting from AI. But then I realized a 5,000-word essay is really just ten 500-word essays stacked together. If I segmented it into smaller, bite-sized pieces, it’s easier for the AI to cope.
Breaking Things Into Bite-Sized Pieces So AI Doesn’t Lose the Plot
When I added the segmentation strategy into the prompt (along with the magic +/-1% margin for breathing room), ChatGPT became laser-focused.
Here’s my revised prompt, engineered to handle much larger word counts. The segmentation instructions in particular were designed for wrangling oversized output:
> Programmatically craft a 5000 (+/-1%) word essay on [INSERT TOPIC].
> Ensure it’s 5000 (+/-1%) words before presenting it.
> If it is not 5000 (+/-1%) words, make minor adjustments to the length by adding/removing the amount needed to hit the target.
> Before making any adjustments, estimate the exact number of words to add/remove.
> Use segmentation.
> Because this essay is quite long, consider breaking it into ten sections. They may need to be processed in separate code blocks. You will need to adjust their length dynamically, according to the remaining word budget. You may add more sections to reach the target.
> Make small incremental changes as needed, but keep a tally of each word as you add or remove it, in order to stay on budget.
> Use programmatic word counting to ensure accuracy in keeping track of changes to the essay at every stage.
> When you near the target, adjustments should get smaller so as not to overshoot.
> If you are within 50 words of the target, consider it a success and stop 🏆
> The final step is to provide the ENTIRE essay in a downloadable text file.
> Don’t interrupt the process, keep going until the goal is achieved. I do not want to have to intervene!
Trial Four: Long and Winding Road to 5000 Words
I decided to use the above to generate a massive essay on The Beatles. My prompt worked better than expected. I had assumed it would segment it into ten mini essays across multiple replies, but actually, other than one nudge prompt of “Please continue until your 🏆 has been achieved!”, everything came together beautifully in a single reply of code blocks.
The final output was actually too large to present without truncation, so ChatGPT offered to provide the whole text in a downloadable format. It came in at 4,945 words, close enough that nobody’s going to pull out a red pen.


You can check out the chat for Jim’s experiment with a 5000 word count here
For comparison, when I made the same request (“5000 word essay on The Beatles”) without using my strategy or Memory, ChatGPT only managed 1,678 words. I ran my prompt again for a video walkthrough, and it did even better: 5,010!
The amazing thing is, the entire process — the 5000 word essay, all the edits, and the commentary and code itself — was all included in one single response!
Yes, ChatGPT Can Write 5,000 Words on the Dot
Here’s another example, generated in real time so you can see the process:
https://www.youtube.com/watch?v=zazn_WWCu-s
Segmentation prevented the AI from getting lost in its own output. Instead of juggling an entire essay at once, it takes on a smaller, more manageable task. This means fewer overshoots, fewer miscounts, and less seesawing. I hypothesise that even smaller sections would increase word-count accuracy further.
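As a sketch, the dynamic budgeting the prompt asks for looks roughly like this (write_section is a hypothetical stand-in for the model drafting each chunk; in the real workflow this bookkeeping happens inside the code interpreter):

```python
def count_words(text: str) -> int:
    return len(text.split())

def write_section(topic: str, word_budget: int) -> str:
    # Toy stand-in for the model drafting a section of roughly word_budget words
    return " ".join(["word"] * word_budget)

def segmented_essay(topic: str, target: int = 5000, sections: int = 10) -> str:
    parts = []
    for i in range(sections):
        written_so_far = sum(count_words(p) for p in parts)
        remaining = target - written_so_far
        budget = remaining // (sections - i)  # spread the remaining budget over the remaining sections
        parts.append(write_section(topic, budget))
    return "\n\n".join(parts)

essay = segmented_essay("The Beatles")
print(count_words(essay))  # lands on (or very near) the 5,000-word target
```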
Here are some more examples of ≈5000 words generated with the prompt:




From Word Chaos to Word-Count Wizardry
Of course, this is ChatGPT, so results can be unpredictable, especially at the larger end of the scale (over 1,000 words). The token spend is high, so be prepared for timeouts. You may also need to refresh, or start over if the instructions don’t take on the first try (you can usually tell early on whether the code blocks are working). Also, if it comes close to your target before settling down, you can stop it sooner. And always use a word counter to confirm.
But all in all, this technique is a solution to what has been a guessing game (“How long is this AI output actually going to be?”). It also solves what was considered an impossible prompting puzzle. We’ve made AI do something it was not designed to do, and overcome the numeracy blindness of LLMs.
I finally got ChatGPT to work within fixed word limits. Using word counting, Memory settings, code interpreter, segmentation, and a tolerance of 1%, our freewheeling chatbot has been transformed into a word-count wizard. Now you can relax, as ChatGPT counts like it cares.