
Making blog thumbnails and images is tedious. I don't want to fight with GIMP or whatever, and I don't think anyone cares what the image really is.
To make publishing easier, this site now checks for the image in each markdown post during the build to see if it already exists. If not, a prompt is built from the post title and content and sent to OpenAI's image API. The returned art is saved alongside the post and a 70×70 thumbnail is produced using Sharp.
Making the image for this post cost me about $0.12 CAD in OpenAI credits. Entertainingly, that's probably more than a week of hosting this site, given the setup I'm using to host it.
This automation was initially coded with Codex, but it didn't work quite right. I fixed it up with Cursor, then finished it off manually to get it across the finish line. This was as much about playing with AI dev tools as building the feature.
Check out the full script here: generate-images.mjs.
The image generation system is integrated into the build pipeline through the package.json build script:
"build": "node scripts/generate-images.mjs && next build"
Before Next.js builds the static site, the generate-images.mjs script runs and:
- Scans `markdown/posts/` and reads each post's front matter
- Checks the `image` and `thumbnail` fields to see whether those files already exist
- Checks that `OPENAI_API_KEY` is set

It's all pretty straightforward.
High-level script logic:
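A sketch of that decision, assuming a parsed front-matter object and a per-post image directory (the helper name and shape are my illustration, not the actual script):

```javascript
import fs from 'node:fs';
import path from 'node:path';

// Illustrative sketch: decide whether a post still needs generated art.
// `frontMatter` is the post's parsed front matter; `postDir` is where
// its images live (assumed layout, not the script's real signature).
function needsImage(frontMatter, postDir) {
  if (!process.env.OPENAI_API_KEY) return false; // no key: warn and skip
  if (!frontMatter.image) return true;           // no image declared yet
  // Declared but missing on disk → generate it.
  return !fs.existsSync(path.join(postDir, frontMatter.image));
}
```

Posts whose image already exists are skipped entirely, so the build only pays for new posts.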
Generating the image is also simple:
- If the post's front matter has an `imgprompt` field, use that as the summary of the post.
- If `imgprompt` is not present, get `gpt-4o-mini` to write a summary of the post.
- Use `dall-e-3` to create the image.

At the time of writing (I won't keep this updated, check the code), I used these prompts:
Making the summary:
```javascript
async function summarizeBlogContent(data, content) {
  const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
  const prompt = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [
      {
        role: 'user',
        content: `In max 20 words, describe a simple logo image that would represent this blog post: ${content}`,
      },
    ],
  });
  const summary = prompt.choices[0].message.content;
  console.log(`✨📝 Created summary of "${data.title}" → ${summary}`);
  return summary;
}
```
Making the image prompt:
```javascript
async function getPromptFromContent(data, content) {
  const subject = data.imgprompt
    ? data.imgprompt
    : await summarizeBlogContent(data, content);
  const prompt = [
    "Simple minimalist flat vector icon.",
    `Subject: "${subject}".`,
    "Style: clean flat SVG-style vector, crisp edges, no gradients, no shadows, no textures, no lighting effects.",
    "Colors: limited palette, high contrast, modern minimal. Black, white, and primary colors only.",
    "Composition: centered subject with generous whitespace, 1:1 icon framing.",
    "Background: pure solid white (#FFFFFF) and completely filled. Nothing around the logo.",
    "No transparency. No checkerboard. No grid. No background elements.",
    "No text. No words. Just a simple image centered in white space."
  ].join("\n");
  return prompt;
}
```
Set your OpenAI API key:

```shell
export OPENAI_API_KEY="your-api-key-here"
```
Without this key the build just skips image generation with a warning.
DALL·E 3 is super frustrating when it comes to just making a white background. I spent a couple of hours trying different prompts and it just refuses. Here's the prompt the code came up with for this page:
```
Simple minimalist flat vector icon.
Subject: "A simple logo featuring a stylized laptop and a paintbrush, symbolizing effortless content creation and automated design.".
Style: clean flat SVG-style vector, crisp edges, no gradients, no shadows, no textures, no lighting effects.
Colors: limited palette, high contrast, modern minimal. Black, white, and primary colors only.
Composition: centered subject with generous whitespace, 1:1 icon framing.
Background: pure solid white (#FFFFFF) and completely filled. Nothing around the logo.
No transparency. No checkerboard. No grid. No background elements.
No text. No words. Just a simple image centered in white space.
```
It does not seem to care for my preference. If anyone has any tricks for this, I'd love to hear about them.