Welcome to part three of our Bubble tutorial series! In this episode we delve deeper into the OpenAI Assistant API, using threads with the Bubble API connector. Thank you for your patience; I've received a deluge of requests asking about this part.

The delay was intentional on my end. I've spent a good amount of time trying to devise a simple, elegant way to tell when a message is ready in OpenAI. Unfortunately, an "ideal" solution has proved elusive. In the spirit of sharing and continuous learning, I've decided to present a working solution, along with its pitfalls, in this video. Let's dive in.

The OpenAI Assistant API in Bubble: A Quick Demo

For demonstration purposes, let's create a new chat. Asking "What is your role?" displays our message and an OpenAI response. This brings us to a critical question: if your application is already working well without the Assistant API, with API calls that receive immediate responses, should you switch to OpenAI's thread management?

I would urge you to think twice before doing so, unless you have a strong requirement to outsource thread management to OpenAI. Let's explore why by opening the Assistant API setup in the Bubble editor.

Unpacking the Assistant API in Bubble

The Assistant API is not the most elegant solution when it comes to handling thread updates. Currently, a 'do every one second' action refreshes the contents of the Repeating Group that displays the thread. Moreover, OpenAI does not notify us when a 'run' completes.

The only way to know that a run has completed is to poll: repeatedly retrieve the run object, or repeatedly call the messages endpoint. Both approaches consume workload units and can churn through them rapidly. Let's look at the logs for a better understanding.
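Outside of Bubble, that poll-until-complete pattern looks roughly like the sketch below. The `fetch_run` callable stands in for a "retrieve run" API call (in the official Python SDK that would be something like `client.beta.threads.runs.retrieve(...).status`); the helper name and intervals here are illustrative, not part of any SDK.

```python
import time

def wait_for_run(fetch_run, interval=1.0, timeout=60.0):
    """Poll a 'retrieve run' call until the run leaves its in-progress states.

    fetch_run: a zero-argument callable returning the run's status string.
    Every iteration costs one API request (or, in Bubble, workload units).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_run()
        if status not in ("queued", "in_progress"):
            return status          # e.g. "completed", "failed", "cancelled"
        time.sleep(interval)       # wait before polling again
    raise TimeoutError("run did not finish in time")

# Simulated run that completes on the third poll:
statuses = iter(["queued", "in_progress", "completed"])
print(wait_for_run(lambda: next(statuses), interval=0.01))  # -> completed
```

This is exactly what the 'do every one second' workflow does in Bubble, just made explicit: each loop iteration is a billable request, which is why the workload-unit cost adds up.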

In my testing, the version with an added conditional check, refreshing only when the run object's status was 'completed', actually consumed the most workload units. By contrast, simply making the API call once every second consumed fewer. Tread carefully, though: every application's use case is different.

If you use the message-polling method, which I consider sub-optimal, you need to understand that Bubble caches the results of a 'get data from external API' call. Quick, repeated runs may therefore not cause Bubble to refetch. To counteract this, you signal Bubble that the data has changed by adding an extra header, named 'date', to the list-messages request and filling it with the current timestamp. Because the timestamp makes each request unique, Bubble treats every call as new and updates the data every second.
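The cache-busting idea, sketched outside of Bubble: attach a per-request timestamp header so no two list-messages requests look identical to a cache. The header name 'date' mirrors what we set up in the API connector; the API key and the `OpenAI-Beta` header value are placeholders for whatever your connector already sends.

```python
import time

def cache_busting_headers(api_key):
    """Headers for a 'list messages' request that defeat response caching.

    The extra 'date' header carries a unique timestamp, so a cache keyed on
    the full request (as Bubble's 'get data from external API' is) treats
    every call as new. The bearer key is a placeholder.
    """
    return {
        "Authorization": f"Bearer {api_key}",
        "OpenAI-Beta": "assistants=v1",        # beta header, assumed connector setting
        "date": str(time.time_ns()),           # unique per call at ns resolution
    }

h1 = cache_busting_headers("sk-placeholder")
time.sleep(0.001)
h2 = cache_busting_headers("sk-placeholder")
print(h1["date"] != h2["date"])  # -> True: no two requests share a cache key
```

A query-string parameter would work just as well; the only requirement is that something in the request differs on every call.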

Pitfalls and Limitations of OpenAI's Assistant API

Despite the workarounds, this method still isn't ideal. Offloading thread management to OpenAI might seem appealing since it stores your messages for you, but the process is arguably more complicated than what we demonstrated in previous videos.

Additionally, the Assistant API currently has limitations and shortcomings within Bubble. Remember that the API is still in beta: while we shouldn't expect drastic changes, updates can still change how it behaves.

For a smoother user experience, we would need OpenAI to let you supply a webhook URL when submitting the 'run' command. OpenAI would then notify your Bubble app once the run completes, and you could fetch the generated text at that point. Services like AssemblyAI, a speech-to-text API, already work this way.
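For comparison, this is roughly what the webhook pattern looks like with AssemblyAI: you pass a `webhook_url` when creating the transcription job, and their server calls that URL when the result is ready, so no polling loop is needed. Sketched here as a payload builder; the audio URL and the Bubble backend-workflow URL are placeholders.

```python
def build_transcript_request(audio_url, webhook_url):
    """Request body for AssemblyAI's POST /v2/transcript endpoint.

    Including webhook_url tells AssemblyAI to notify that URL when the
    transcript is done, instead of the client polling for completion.
    This is the kind of parameter the Assistant API's 'run' command
    currently lacks.
    """
    return {
        "audio_url": audio_url,
        "webhook_url": webhook_url,
    }

payload = build_transcript_request(
    "https://example.com/audio.mp3",                       # placeholder audio file
    "https://my-app.example/api/1.1/wf/run_done",          # placeholder Bubble backend workflow URL
)
print(payload["webhook_url"])
```

In Bubble, the webhook target would be a backend workflow exposed as a public API endpoint; the callback triggers it, and the workflow then fetches the finished result with a single API call.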

When to Consider Using the Assistant API

I would strongly advise reserving the Assistant API for cases where you truly need it: very large conversations, or workloads where processing large amounts of data is known to cause timeouts. Make an informed decision based on your specific needs and failure scenarios.

Also consider the cost in workload units. In my testing, keeping the page running consumed roughly six workload units per minute. Evaluate that cost and decide whether it is worth it for your specific use case.
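To put that figure in perspective, here is a back-of-the-envelope projection. It assumes the roughly six workload units per minute measured above, and that polling only runs while a chat page is open; the session length and traffic numbers are made-up inputs for illustration.

```python
WU_PER_MINUTE = 6        # rough figure from my tests; your app will differ

minutes_per_session = 10  # assumed average chat session length
sessions_per_day = 50     # assumed daily traffic

# Polling cost scales linearly with total minutes of open chat pages.
wu_per_day = WU_PER_MINUTE * minutes_per_session * sessions_per_day
wu_per_month = wu_per_day * 30

print(wu_per_day)    # -> 3000
print(wu_per_month)  # -> 90000
```

Plug in your own session and traffic numbers, then compare the result against your Bubble plan's monthly workload-unit allowance before committing to the polling approach.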

Concluding Thoughts

Before ending this video and tutorial, I must concede that the solution demonstrated here is less elegant than those in previous videos. But since the OpenAI Assistant API is still in beta, we decided to show a working solution despite its imperfections.

Consider your own needs and constraints before deciding whether to use the Assistant API or stick with the chat endpoint. Further testing and optimization will help us better understand and work around the Assistant API's current shortcomings in Bubble.

We will be watching updates to the OpenAI API closely. Hopefully we will soon be able to present a more efficient solution built on OpenAI's latest features. Stay tuned; the journey continues.
