Image Studio AI

Productizing Generative AI for Production

I led the end-to-end system design, building a structured control layer on top of a probabilistic model.
Introducing controlled generation workflows reduced iteration friction and strengthened enterprise trust after launch.
Impact

  • MAU Growth (MoM)
  • New Users (MoM)
  • Avg. Usage per User
  • Images per User
  • NPS
The Real Problem

AI is probabilistic.

Two broken paths, zero good options.

Production requires predictability.
Food brands struggled with:
  • Output variance
  • Prompt dependency
  • No reusable brand memory
The gap wasn’t image quality; it was operational reliability.
DISCOVERY

My Approach: Build Control Around the Model

Instead of:
Prompt → Generate → Download

1. Structured Input
2. Iterate
3. Save
4. Reuse


Problem Statement

Food brands and creators need to generate large volumes of high-quality food visuals, but existing workflows are expensive and slow, and generic AI tools create inconsistent results that require heavy prompting and rework.

Design Goals × Metrics

How We Measured Success

1. Reduce production time
   Metric: Time-to-first-usable image ↓ 60%
   Signal: Users export/save within the first 3 generations

2. Improve brand consistency
   Metric: Consistency score ↑ from 2.6 → 4.0 / 5
   Signal: Fewer “style reset” regenerations per asset set

3. Make AI usable for non-experts
   Metric: Prompt edits per generation ↓ 40%
   Signal: Users rely on UI presets vs free-typing prompts

4. Enable scalable production
   Metric: Multi-asset creation per session ↑
   Signal: Users generate 5+ images/session for a campaign pack
Solution (Workflow-First AI Studio)

Replace prompt-heavy input with structured visual controls

Instead of relying on free-text prompts, we exposed key food photography variables as explicit UI controls.
These variables are what actually drive visual consistency. Making them explicit reduces ambiguity and lowers the learning curve for non-expert users.

Enable user-created presets through reusable settings

Rather than providing system-defined presets, the beta allows users to turn a successful configuration into a reusable baseline via “Use Settings”.
Brand teams care about repeating their definition of “good”. User-created presets preserve consistency without enforcing assumptions too early.
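The “Use Settings” pattern can be sketched as a simple data structure: capture the control values behind a successful generation, name them, and reuse them with per-asset overrides. This is a minimal Python sketch of the idea only; all names here (`GenerationSettings`, `save_preset`, the specific fields) are hypothetical, not the product’s actual API.

```python
from dataclasses import dataclass, replace

# Hypothetical structured controls; the product's real fields may differ.
@dataclass(frozen=True)
class GenerationSettings:
    lighting: str
    angle: str
    background: str
    style_strength: float

def save_preset(library: dict, name: str, settings: GenerationSettings) -> None:
    """Store a successful configuration as a named, reusable baseline."""
    library[name] = settings

def reuse_with_overrides(preset: GenerationSettings, **overrides) -> GenerationSettings:
    """Start from a saved baseline and change only what varies per asset."""
    return replace(preset, **overrides)

library: dict[str, GenerationSettings] = {}

# A generation the user liked becomes a named preset...
good_run = GenerationSettings("soft daylight", "45-degree", "marble", 0.8)
save_preset(library, "brand-hero-shot", good_run)

# ...and later runs start from it instead of from a blank prompt.
next_run = reuse_with_overrides(library["brand-hero-shot"], background="dark slate")
```

The point of the pattern is that the preset, not the user’s memory of a prompt, carries the brand’s definition of “good” between sessions.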

Threaded history with one-click reuse & regenerate

The generation history is designed not only for review, but for fast reuse and regeneration.
In creative workflows, restarting is expensive.
By allowing users to reuse and regenerate directly from history, we turn past success into the fastest path forward.

Mode-based workflow switching

Users can switch modes instantly without leaving the page or losing context.
In real production workflows, teams move fluidly between generating, refining, and branding assets. Mode-based switching reduces friction and prevents users from restarting tasks in separate tools.

Integrated Product Placement for brand-ready visuals

Users can insert branded products into generated visuals without leaving the generation context, instead of exporting images to external editing tools.
Brand and commerce teams often need visuals that include specific products.
Embedding product placement into the same workflow reduces handoff friction and shortens the path from generation to publishable assets.

Before / After comparison (Coming Soon)

To help users evaluate AI outputs more efficiently, we designed a before / after comparison between the reference image and generated results.
In creative workflows, decision-making is often the slowest step. Side-by-side comparison reduces cognitive load and shortens iteration cycles by making differences immediately visible.

Smart Style Guideline Generation from Uploaded References

To help brand teams scale visual consistency more efficiently, we explored a solution that allows users to upload existing photo guidelines (images or PDFs), which the system can then translate into structured, usable style guidelines.
(Coming Soon)
Many brands already have established visual standards. Translating existing guidelines into machine-readable controls significantly reduces setup cost and accelerates adoption for enterprise users.
Results & Impact

Deeper User Engagement

These signals suggest that guided controls, reusable settings, and integrated workflows effectively reduce iteration cost and support real-world content production.
  • Avg. Total Usage Time / User (mins)
  • Avg. Images Generated / User
  • NPS Score (users rate the product as easy and valuable to use)
  • MAU Growth (%)
  • Avg. Sessions / User