OpenAI has just released a public beta of the DALL-E 2 API, which means you can now generate AI images within your Bubble app. Here's how.
So I have a Bubble app here, and I've installed the API Connector because we'll be making API calls to the OpenAI API. This existing connection is from a video we did about AI text generation, but so that this demonstration is fully comprehensive, I'll add the connection again and call this one DALL-E.
Bubble API Connector
What we need to do with any API is find the right documentation. I've got an account with OpenAI and gone to the image generation guide, which provides everything we need to set up the API Connector to connect to OpenAI. So I select the endpoint URL and copy it; this is going to be part of our call. I'll name the call 'generate image' and paste in the URL there. Now, I'm going to select 'Action' here because I want this to take place within a workflow. And there are some other sections I need to fill out, such as the Content-Type header.
One thing the API Connector lets you do is share headers across calls, which is useful in case I want to add additional calls to this API later. I know these headers are likely to be shared across any further calls I set up, so I paste them in there, then Authorization, and we'll come back to the API key in a moment.
Everything marked with '-H' in the docs belongs in my header section, and the '-d' part is my data, or body, section. So I copy everything, including the curly brackets but not the surrounding quotes, and change the call type to POST, because then I get a body section to paste it into.
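To make the headers-versus-body split concrete, here is a rough sketch of the request the API Connector ends up sending. The endpoint and field names follow OpenAI's image generation docs; the key is a placeholder, not a real credential.

```python
import json

# Sketch of the call the API Connector reproduces. The endpoint and
# field names follow OpenAI's image generation documentation; the key
# below is a placeholder.
API_URL = "https://api.openai.com/v1/images/generations"

# These are the '-H' (header) parts, shared across calls in the Connector.
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer sk-YOUR-KEY-HERE",
}

# This is the '-d' (data/body) part, sent as a JSON string in a POST.
body = {
    "prompt": "a polar bear eating an ice cream on the beach",
    "n": 1,
    "size": "512x512",
}

payload = json.dumps(body)
print(payload)
```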
Variables / Dynamic text
Let's start making our call a bit more dynamic. If I wrap a value in angle brackets, I can turn it into a named variable, and the API Connector gives me this box here. I'm going to untick 'Private' because I want to be able to pass a value into this variable from the workflow.
Then we just look at the other parameters and their explanations. The 'n' value is how many images are returned, and then obviously we have 'size'. I'm going to take the size down a little, because smaller images generate slightly quicker.
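For reference, DALL-E 2 only accepts three square sizes, so it can help to validate before sending. This is an illustrative helper of my own, not part of the Bubble setup:

```python
# DALL-E 2 accepts three square sizes; smaller images generate faster.
# "n" controls how many image variations come back.
VALID_SIZES = {"256x256", "512x512", "1024x1024"}

def build_body(prompt, n=1, size="512x512"):
    """Assemble the JSON body for the generate-image call (a sketch)."""
    if size not in VALID_SIZES:
        raise ValueError(f"size must be one of {sorted(VALID_SIZES)}")
    return {"prompt": prompt, "n": n, "size": size}

body = build_body("a polar bear eating an ice cream on the beach")
print(body)
```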
How to create an OpenAI API Secret Key
Okay, then we need our API key. I go over to my OpenAI account and click 'Create new secret key'. I copy my API key (I've blurred it out here) and paste it into the Authorization box.
Then, all being well, the next step is to initialise the call, and we'll see whether the parameters we've entered are correct and whether the call is successful.
But I need to put in a prompt first. You can have great fun with this. Let's go with 'a polar bear eating an ice cream on the beach'. Let's try that.
Remember to add 'Bearer' to your API call
Okay, so my Authorization header hasn't gone through. I reckon I know what it is, but I'm just going to check the documentation. Yes, I need to put 'Bearer' before my API key, which is very common. Let's try that.
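This is the exact format the header needs: the literal word 'Bearer', a space, then the secret key. A tiny sketch with a made-up key:

```python
def auth_header(api_key):
    # The Authorization header must be the literal word "Bearer",
    # a space, then the secret key. Omitting "Bearer" is exactly the
    # error hit above.
    return {"Authorization": f"Bearer {api_key}"}

print(auth_header("sk-example")["Authorization"])  # -> Bearer sk-example
```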
It's taking some time. That's good.
And we get back just a couple of fields here, including a URL. I would bet that this URL is going to show us our image. So I'll just copy it and open a new tab.
There we go: a polar bear on the beach eating an ice cream, so it works. I click save, because now, within the API Connector in Bubble, everything is set up and working.
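For reference, the response body looks roughly like this. This is a hand-written sample in the documented shape, with a made-up URL, not a real API reply:

```python
import json

# Hand-written sample in the documented response shape; the URL and
# timestamp are made up for illustration.
sample_response = json.loads("""
{
  "created": 1667843040,
  "data": [
    {"url": "https://example.com/generated-image-1.png"}
  ]
}
""")

# The image lives at: data -> first item -> url.
image_url = sample_response["data"][0]["url"]
print(image_url)
```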
Front end AI image generator
So how would I do this on the frontend?
So I have a blank page here, and we need a multiline text input for our prompt, a button, a way to display an image, and somewhere to store that image. I'm going to use a Custom State for that. To avoid confusion, let's put it on the page: under Custom States, I'll call it 'returned image', with the type 'image'. Then the Dynamic Image source is going to be the page's 'returned image' Custom State. Okay, let's wire up our workflow to generate the AI image. Because we set the call up as an Action, I should be able to find it under whatever I named it: 'DALL-E - Generate image'. There we go. In case that's a little confusing, that refers directly to the name I gave the call in the API Connector. And it only appears here because it is an Action; if you'd set it up as Data, you wouldn't be able to access it in the same way within a workflow.
And then we need to save the image that's returned: I set the page's 'returned image' state to the result of the previous step's data URL. If we go back and look at the response we get, the URL is inside 'data'. But of course, with DALL-E you can request more than one version; it could give me five different polar bears on the beach. So Bubble assumes we're getting back a list, and that's why I need 'each item's URL: first item', which returns just the first URL. Let's preview that, and make the page a little tidier too: we'll reset the inputs. There we go. I also need to wire up the prompt, otherwise we'll just get another polar bear back, so the prompt variable is set to this multiline input's value.
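Bubble's 'each item's URL: first item' expression corresponds to this list-then-index step. A sketch with made-up URLs:

```python
# When n > 1, "data" is a list of images. Bubble's expression
# "data: each item's url: first item" maps the list to its URLs and
# takes the first one. The URLs here are made up for illustration.
response = {
    "data": [
        {"url": "https://example.com/polar-bear-1.png"},
        {"url": "https://example.com/polar-bear-2.png"},
    ]
}

urls = [item["url"] for item in response["data"]]  # each item's url
first_url = urls[0]                                # first item
print(first_url)
```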
Testing front end AI image generator
Okay, let's try something different: 'a house cat and a robin sharing a coffee in Paris'. Okay, not quite what we asked for, but that's exactly why you can request several different versions back.
Now, you'll notice that it takes some time; that's because there is a huge amount of computing power going into generating the image.
Using OpenAI with users - warning
So let's recap: we now have the DALL-E API working with Bubble via the OpenAI API. The DALL-E API is in public beta at the moment, so you'll want to read really carefully through OpenAI's documentation, such as their content policy. If you build this into a fully fledged application that other users will be using, you'll want to put steps in place to protect your OpenAI account from prompts that would violate their terms.
So there we have it. This is really such an exciting thing: you've probably seen so many people on social media sharing the images they've generated, and now you can build an application just like that in Bubble.