• Thelsim@sh.itjust.works · 1 year ago

    I guess I can understand the confusion :)

    Since the original picture was made with Bing (which uses DALL-E to generate its images), I thought I might be able to reconstruct the prompt by describing elements of the image to ChatGPT (which also uses DALL-E).
    I described the original sketch and the description that came with it, then kept adding details. I told it that the final image is in color, that the grove is set at dusk, that the creature is relaxing in the pond, holding a glowing ball, etc. And every time, it would alter the prompt to fit all those elements.
    Finally, I wanted it to send that prompt to DALL-E and generate the image, which is where it all went wrong. Ever since an “upgrade”, DALL-E seems to have trouble generating images. I got very annoyed and tried simpler and simpler descriptions, but it just wouldn’t budge. In the end I just had ChatGPT write a summary of everything we’d tried to accomplish and a little apology for not managing to generate the image. That way, I felt like I’d accomplished at least something :)

    I’m sure there are more “efficient” ways of doing this, but this is a lot more fun :)
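
    For anyone curious, the “efficient” version would probably be to skip the chat interface and script it against the API directly. Here’s a rough sketch of what I mean, not what I actually did: it assumes the official `openai` Python SDK, an API key in the environment, and that the model names (`gpt-4o`, `dall-e-3`) are available to you.

    ```python
    # Rough sketch: have a chat model condense the accumulated details into one
    # DALL-E prompt, then send that prompt straight to the image endpoint.
    # Assumes the official `openai` Python SDK (v1.x) and OPENAI_API_KEY set;
    # the descriptions below are just placeholders for the real ones.
    from openai import OpenAI

    client = OpenAI()

    # Start from the original sketch's description, then pile on the extras.
    base_description = "A hidden grove with a small pond, based on the original sketch"
    extra_details = [
        "the final image is in color",
        "the grove is set at dusk",
        "the creature is relaxing in the pond, holding a glowing ball",
    ]

    # Ask the chat model to turn all of that into a single image prompt.
    chat = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Write one concise DALL-E prompt from the user's notes."},
            {"role": "user", "content": base_description + ". Also: " + "; ".join(extra_details)},
        ],
    )
    prompt = chat.choices[0].message.content

    # Generate the image directly instead of hoping ChatGPT forwards the prompt.
    image = client.images.generate(model="dall-e-3", prompt=prompt, size="1024x1024", n=1)
    print(image.data[0].url)
    ```

    Less fun than arguing with ChatGPT, admittedly, but at least the prompt actually reaches DALL-E.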