Exploring AI Art: The Next Step On My Midjourney

Amateur Mortician, Process & Aesthetics

Not a month into working with Midjourney, I can see how we still need illustrators. Midjourney is, as yet, not capable of rendering any image a person can imagine. Here’s what I mean.

Let’s start with the kind of description an art director might give to an illustrator: “Setting: Exterior of an ancient Egyptian tomb at night. Subject: An Egyptian mummy carries an unconscious woman in his arms. The mummy is wrapped in bandages with his skull exposed. The woman is dressed like an archaeologist. Style: Gouache painting, like a pulp magazine cover. Color: Contrasting blue and orange. --ar 2:3”

Simple enough. Here’s what it handed back to me.

What part of “carries an unconscious woman in his arms” doesn’t it understand? In none of these images is the woman being carried, nor is she unconscious. Midjourney isn’t even sure what a mummy looks like, or an archaeologist. At least it got the colors right.

Let’s try another: “Setting: A cozy drawing room with antique, wood furniture. Subject: Dracula and Frankenstein’s Monster sit at a small table, playing chess.”

Here’s the result.

I do not have the patience to explain to Midjourney what Dracula and Frankenstein’s Monster look like. And it seems absurd that I would have to, because both monsters are iconic. I’m not surprised that it doesn’t know what a game of chess looks like, because it doesn’t know how to play chess. That’s for another AI to do.

I know a lot of smart programmers are racing to improve AI and that it will one day be proficient, but for now, I’ll only use Midjourney for simple subjects.

Exploring AI Art: Setting Out On My Midjourney

Amateur Mortician, Comics, Process & Aesthetics, Web Comics

After experimenting with Stable Diffusion, Dall-E, and Adobe Firefly, I decided to delve deeper into AI-generated art by purchasing a month-long subscription to Midjourney.

The Internet tells me Midjourney delivers the best results, although many consider it the most challenging to navigate due to its Discord interface.

For my trial run, I chose Dorothy DeCarnage, the enigmatic host of my Webtoon series, ‘Amateur Mortician,’ as my subject. I prompted Midjourney with specific traits: a 50-year-old woman with pale skin, a slender frame, a wide jaw, a narrow nose, and dark eyes. Also her defining feature: a sleek black pageboy haircut.

Portrait of Dorothy DeCarnage.
Dorothy DeCarnage, the hostess with the grossest.

I was immediately impressed by the results. Midjourney got Dorothy within minutes. In about an hour, it synthesized a pile of portraits that were 90% of the way to usable.

Heeeere’s Dorothy!

However, there were occasional hiccups. Midjourney struggled to get her hair right at times, and maintaining her age proved to be a challenge, with a tendency to portray her as younger until nudged to include wrinkles.

Right Age, Wrong Hair

Right hair, wrong age.

Interestingly, Midjourney always painted Dorothy with a…uh… sultry allure without that being included in the prompt. Maybe there was something else in the description that pushed Midjourney to up the hotness.

Nothing about cleavage or a plunging neckline in the prompt.

With plenty of time left in my subscription, I’m eager to see what else Midjourney can do. The next stage will be moving beyond simple portraits to creating complete scenes and situations. I’ll post about that when it’s done.

Working with DALL-E 2

Process & Aesthetics

As I’ve said in earlier posts, AI is going to revolutionize the creative process. Already I’ve found a good use for it – designing elements I have neither the knack for, nor a deep interest in.

For instance: spacecraft. Not my jam. But I wrote a comic story set in space and I needed to whip up a landing pod. I asked DALL-E 2 to give me some ideas and it returned a few options, including this:

Sketch of landing pod spacecraft created by DALL-E.

Cool! Using that as a sketch, I drew this:

Drawing of landing pod spacecraft

DALL-E 2 did what I would have done – scan the internet for images, then cobble them together into a rough sketch. Only DALL-E 2 did it in less than a minute.

Last week, I published the first episode of Amateur Mortician on Webtoon. I want to drop a story a week, and it takes me two to four weeks to draw each one. I still don’t know how I’m going to keep up with that publishing schedule, but using AI has the potential to shave a few hours off each story.