Combining multiple AI prompts with UsePlumb.com

Streamline your AI-powered no-code app development process with Plumb's AI pipeline builder tool, making it easier to incorporate multiple AI features seamlessly into your Bubble.io app. With structured JSON outputs and efficient testing capabilities, Plumb is a game-changer for simplifying complex workflows.

Building AI-Powered No-Code Apps with Bubble

If you're building an AI-powered no-code app with Bubble, then you need to check out useplumb.com. Plumb can help you build, test, and deploy AI features with confidence, because its pipeline tool lets you bring together lots of AI-powered nodes to create workflows that simply wouldn't be possible in a Bubble app alone.

As an example, here is a really simple pipeline that sends requests to two separate AIs, combines their responses into a single output, and returns that. And here's what it looks like in my Bubble app: a content generation AI that I built. I'm going to demo from a blank canvas how I built it in this video. But we get back some lovely structured JSON.
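Something of roughly this shape. The field names and values below are illustrative, since they depend on how you configure the pipeline, but this is the kind of response that lands in Bubble:

{
  "title": "Building AI No-Code Apps with Bubble",
  "tags": ["OpenAI", "ChatGPT", "Claude"],
  "content": "Full blog post text goes here...",
  "tweets": [
    "First tweet about building AI apps in Bubble...",
    "Second tweet...",
    "Third tweet..."
  ]
}

Because tweets comes back as a genuine JSON list, Bubble's API Connector can expose it as a list of texts directly.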

Advantages of Using JSON with Bubble

So no more having to use split by to pick out different parts of the response from OpenAI. We get our title, we get tags, we get content, and we get our tweets in a nice list in the JSON. So let me show you how I built this by diving into the Plumb pipeline builder. Here's where we start: I've created a new pipeline called content generator, and the first thing I'm going to do is set my inputs.

Setting Inputs in Plumb Pipeline Builder

So I'm going to have topic, because I'm going to feed in a topic, and I'm going to feed in some keywords, and I'm going to get a series of AI responses out, all back in a single call, while pointing out as I go some of the really cool features Plumb offers that make this easier than building the same workflow in Bubble alone. So we're going to have topic and keywords, and they're both required. Let's make a bit of space. Then I'm going to go into AI prompt, drag in the text LLM node, rename it to generate blog, hit save, and link the two up.

Linking Data in the Plumb Pipeline

So when you pass data through a Plumb pipeline, you have to have a direct link between the steps. In order to access my topic and my keywords, I need to create a branch going between them. Now I can write my prompt in here: write an SEO-focused blog post about, and then I insert my topic; try to include the following keywords, and then I insert my keywords. And this is where it's really easy to demonstrate how Plumb helps you match the right prompt with the right AI model so that you get the right response, because it's dead easy to swap between the different providers. When you're testing, Plumb provides their own keys behind the scenes, but when you deploy your pipeline to production, you will need to supply your own keys for the different providers.
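So the prompt we've just written reads roughly like the sketch below. The double-brace placeholders are just for illustration; in Plumb you insert the variables through the editor itself:

Write an SEO-focused blog post about {{topic}}.
Try to include the following keywords: {{keywords}}.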

Testing with Different AI Models

But I don't want to have to wait for GPT-4, so let's use GPT-3.5 Turbo, and I can test it. So let's test it: I'll set the topic to building AI no-code apps with Bubble, and the keywords to OpenAI, ChatGPT, and Claude.

The Power of Testing with Plumb

And it's now going to run the pipeline, even though I've not completed it. This is part of the power of Plumb: the ability to test, revise, and revisit different parts of the pipeline. So now let's run the next step. Part of the frustration, ever since we first started using GPT-3.5 Turbo with Bubble, but particularly with GPT-4, is what happens when you ask for a lot of content. Here we go.

Generating Long-Form Content

Here's a blog post. It can take a really long time for the AI to respond. Now, I'm going to do a completely separate video on this, but just as a teaser of what you can also do: you can set up a webhook instead of the return response. At the moment, the loading bar goes across the top of your Bubble app and your user is left waiting for the response, at least if you run it in a front-end workflow.

Using Webhooks for Large AI Responses

But by using a webhook, you can basically say: well, actually, it's going to take 15 minutes to generate all this AI content, and the Bubble app is going to time out waiting for a response. So I can use a webhook to send the data through to my Bubble app when the data is ready.
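As a rough sketch of that idea, suppose you expose a Bubble backend workflow called content_ready (a hypothetical name) and give Plumb its URL as the webhook target. When the pipeline finishes, it would POST the finished data, something like:

POST https://yourapp.bubbleapps.io/api/1.1/wf/content_ready
Content-Type: application/json

{
  "blog_post_title": "Building AI No-Code Apps with Bubble",
  "blog_post_content": "Full blog post text goes here..."
}

The payload keys here are illustrative, not Plumb's documented format; the point is that Bubble receives the data as workflow parameters whenever it's ready, so nothing in the front end has to sit and wait.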

Structuring AI Responses

Now, let's actually update this step, because that output is a little unstructured. I don't want raw text; I want a structured output as a record. Because I want to have, well, you can see I've had a few goes at this: I want a blog post title and the blog post content, and I actually want both of these to be required.

Creating Structured Records in Plumb

Oh, no, that's not right. The record is the blog post. Blog post. And then I'm going to have blog post title, and add another property, blog post content. All of this goes behind the scenes into the prompt that Plumb constructs for you and sends off to, in this case, OpenAI.

Descriptive Property Naming for Prompts

So it does help to think descriptively about the property names and the descriptions that you use, because they feed straight into that prompt. We can now rerun the step; at the moment there's no structuring involved, the output just says Title, colon. But now, if I expand, is that going to help? Yeah, there we go. Now we've got different fields, different keys, for our output, which is really helpful because it allows me to structure things further as we go on.
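Conceptually, the record definition is doing the job of something like the JSON Schema below. To be clear, this is my sketch of the idea rather than Plumb's internal format, but it shows why descriptive names and descriptions matter: they're part of what the model actually sees.

{
  "$comment": "Illustrative sketch only, not Plumb's internal format",
  "type": "object",
  "properties": {
    "blog_post_title": {
      "type": "string",
      "description": "An SEO-friendly title for the blog post"
    },
    "blog_post_content": {
      "type": "string",
      "description": "The full body text of the blog post"
    }
  },
  "required": ["blog_post_title", "blog_post_content"]
}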

Adding Additional Steps in the Pipeline

Right. Let's add in another LLM step, because I also want some tweets, and I'm going to demonstrate how we can create that nice list where the items arrive already divided up in the JSON. So I'll name it generate three tweets, and the prompt will say: write three tweets about, and I'll just insert the topic.

Generating Tweets with AI

I won't bother with the keywords. Now we can use structured output and I can say list. So I'll say list of tweets.

Configuring Structured Outputs for Tweets

And yeah, I'll just put write a tweet on the topic as the description, and set that. Now, I want this one to be really quick, so I'm going to change the provider to Anthropic and the model to Haiku. And let's run the whole thing from the top once more.
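If this works as intended, the structured output from this step should come back as a genuine JSON list, shaped something like this (contents illustrative):

{
  "tweets": [
    "Tweet one about the topic...",
    "Tweet two...",
    "Tweet three..."
  ]
}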

The Final Output

And there we go. There are our three tweets, separated out in the JSON that comes back. So what we've got here is two separate AI calls, and we're now going to combine them into a single response. To do that, I need to use a data processing node, and I think the one I want is compose. Move this down here, and so on: compose.

Combining AI Responses

I'm going to rename it and call this one combine responses. You could branch this off and have five different LLM prompts going off and waiting for responses. And so I bring these two into here.

Structuring the Combined Response

And now I build the structure of what I want to send back to my Bubble app when the Plumb pipeline endpoint is run and data is sent to it. So I'm going to have a blog post, and that's got a few properties: I've got title and I've got content. And then I'm also going to have tweets.

Adding Properties to the Combined Response

And there are no properties on tweets. Let me actually add that in again. Tweets.

Linking the Data for Final Output

Okay. And then I'm going to link up to the response. The response is where I say: what data do I want sent back to my Bubble app? Well, I don't just want it to be empty; I want to send back a step, so I'm going to send back the combine responses step, and I want to send back everything in it. So let's test the whole pipeline from the top.
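The combined response we're building toward should come back to Bubble shaped roughly like this (values illustrative):

{
  "blog_post": {
    "title": "Building AI No-Code Apps with Bubble",
    "content": "Full blog post text goes here..."
  },
  "tweets": [
    "Tweet one...",
    "Tweet two...",
    "Tweet three..."
  ]
}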

Customizing Models for Performance

And I should say, as well as being able to deal with the wait by using a webhook instead of the return response, you could easily go in here and change the model. If I thought GPT-3.5 was too slow, or if I just wanted to test one step in the midst of my pipeline where lots of LLM data is coming together, I could swap it out and try another model. Just imagine how much time that saves compared with Bubble, where you might otherwise have to run all of this in a backend workflow and wouldn't be able to easily test and revise individual parts. Plumb is here to save you that time. So here's the response.

Final Testing and Response in Bubble

And yep, we got our tweets and we got our blog post. Right, final part of the video: how do we plug this all into Bubble? We go to deploy, and I'm going to skip straight to production and copy this endpoint.

Deploying the Combined AI Response in Bubble

And I already have my Bubble app, Sorcerer, set up to run this. So I've got Plumb in here, and if you've ever worked in the Bubble API Connector, this will all be familiar to you. I'm just going to replace the endpoint and then check that I've used the same keys: topic and keywords.
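For reference, the call the API Connector makes boils down to a POST against the production endpoint you copied, with the two pipeline inputs as the JSON body. The URL below is just a placeholder for whatever Plumb gives you:

POST https://<your-plumb-production-endpoint>
Content-Type: application/json

{
  "topic": "Building AI no-code apps with Bubble",
  "keywords": "OpenAI, ChatGPT, Claude"
}

The body keys have to match the input names defined at the start of the pipeline, which is exactly what I'm double-checking here.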

Testing the Integration in Bubble

That's what I've written there: keywords. Fine. Cool. So let's give it a try. Now, there's one bit that I missed that caused the workflow to fail, which is that on the tweets here, if I add this back in, I'll say tweets, and then generate three tweets.

Ensuring Proper Data Mapping for Success

And then I just have to say no properties yet, because now if I run it, it's going to work. Okay, there we go. It has worked. We've got a response into Bubble, and we've got all of this beautifully structured JSON data based on our combination of multiple LLM prompts using different models. And Plumb has been there to save the day, to save us time, and especially to save us debugging time.

Conclusion: Benefits of UsePlumb

So that's a really simple demonstration of what you can do with Plumb. I'm going to record a video demonstrating how to use the webhook feature for when you're having to wait a really long time for an LLM to respond, but there's so much more that you can do with Plumb. I'd recommend heading over to useplumb.com (we'll have a link down in the description) and reaching out to the team, because they've been working closely with me and they've shown me some amazing demos of pipelines that just blow your mind with what's possible, including things like feeding a transcription into an AI and separating out the different speakers. It's amazing.
