So you've added the OpenAI API to your Bubble app, and now you're getting an error message like this: "This model's maximum context length is 4097 tokens."
The Challenge of Token Limit in Building ChatGPT Applications
This limit is really easy to hit, especially if you're building a chat app, because you have to send all of the previous messages in the conversation along with every request, so the payload grows very quickly. The same goes for an app that analyses or rewrites huge blog posts. Once you hit the limit on how many tokens you can send, there's a very easy fix: GPT-3.5 Turbo is also available as GPT-3.5 Turbo 16K. That's roughly four times the amount of content you can send in your whole API call.
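To see why a chat app hits the limit so fast, here's a minimal sketch of the accumulation. The ~4 characters-per-token figure is a common rough approximation, not an exact count (a real app would use a tokenizer), and the message contents are just placeholder text:

```python
def estimate_tokens(text):
    """Very rough token estimate: roughly 4 characters per token."""
    return len(text) // 4

def build_payload_messages(history, new_user_message):
    """Each API call must resend the whole conversation so far."""
    return history + [{"role": "user", "content": new_user_message}]

history = [{"role": "system", "content": "You are a helpful assistant."}]
total = 0
for turn in range(50):
    messages = build_payload_messages(history, "A user question " * 20)
    # The token cost of the NEXT call is the whole history plus the new message.
    total = sum(estimate_tokens(m["content"]) for m in messages)
    if total > 4097:
        print(f"Hit the 4,097-token limit around turn {turn + 1}")
        break
    # Pretend the model replied; the reply joins the history too.
    history = messages + [{"role": "assistant", "content": "A long answer " * 40}]
```

Even with modest message sizes, the per-call cost climbs every turn because nothing ever leaves the payload.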
Using the GPT-3.5 Turbo 16K To Expand Your Token Limit
And it's really quick to add. You just copy the 16K model name and, in your API call where the model is currently set to gpt-3.5-turbo, paste in gpt-3.5-turbo-16k instead. And there you have it: you've expanded the number of tokens you can send with the API call. Now, if you're learning Bubble and you like our channel, we'd really appreciate a subscribe and a like on this video.
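In Bubble you make this change in the API Connector, but under the hood it's a one-field edit to the JSON body sent to OpenAI's chat completions endpoint. A minimal sketch (the example message content is made up; the two model names are the real ones from OpenAI):

```python
import json

# Sketch of the JSON body an API Connector call sends to
# https://api.openai.com/v1/chat/completions.
payload = {
    "model": "gpt-3.5-turbo",  # 4,097-token context limit
    "messages": [
        {"role": "user", "content": "Rewrite this long blog post..."},
    ],
}

# The whole fix: point the same call at the 16K-context variant.
payload["model"] = "gpt-3.5-turbo-16k"

print(json.dumps(payload, indent=2))
```

Everything else about the call (messages, headers, authentication) stays exactly the same; only the model name changes.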
And if you're on that Bubble journey and want to consume more Bubble educational content, there are even more videos that you can't find on our YouTube channel. They're available only at planetnocode.com, where you can become a member to unlock all of the videos we've ever made.