GPT-3.5 released, content companies reeling
Plus: how to use AI to mock up website landing pages in seconds
Welcome to The Cusp: cutting-edge AI news (and its implications) explained in simple English.
In this week's issue:
- AI generates incredible quality landing page designs in seconds.
- GPT-3.5 released: long-form (500+ word!) content is now just a few clicks away.
- Stable Diffusion API now available to developers.
Let's dive in.
AI can now generate incredible-quality landing page designs in seconds
Midjourney, a popular diffusion-based image generator, is usually used to make breathtaking art like this:
But Marcel Pociot recently demonstrated that the same technology can be used to create high-quality, custom landing page designs in a fraction of the time.
And it's (perhaps shockingly) very easy to do.
Prompts as simple as "backup website landing page, flat vector, Figma, dribbble, user interface" produce stunning results:
The implications for marketers, web designers, and product teams are clear: you can now create high-quality landing page designs with minimal effort, and it's only going to get easier over the coming months.
How can we take advantage of this?
AI-generated landing page mockups aren't destroying any industries (yet). The images clearly still have issues: illegible text, JPEG artifacts around buttons, etc.
But that doesn't mean they can't significantly improve your workflow. For instance:
- Marketers now have the power to quickly spin up their own landing page mockup with a bit of creative input.
- They can then use this mockup to quickly test the effectiveness of different configurations & layouts with their design team.
This speeds up the creative process significantly. Instead of waiting hours or days to get a mockup back, your marketing team can achieve the same results in a few seconds.
Designers also benefit:
- AI-generated mockups provide them with an incredibly fast way to test out different design ideas and quickly get feedback from stakeholders.
- All a designer needs to do is generate a set of styles, smart erase the incoherent text, and then tweak the design as needed for good-enough results.
Most PPC, website, and design agencies could probably double their output by implementing this intelligently.
GPT-3.5 released: incredible content is now just a few keystrokes away
GPT-3 was a groundbreaking text-generation model, capable of producing short articles, coherent stories, and engaging outlines on a variety of topics.
Now, OpenAI has added text-davinci-003 (part of what they're calling the "GPT-3.5" series) to the repertoire, and its performance is even better.
Trained using a reinforcement-learning paradigm, GPT-3.5 can generate an entire 300-600 word article in a matter of seconds, with minimal edits required.
Don't believe me? Here's ~400 words generated in one click (no cherry-picking):
It's not going to win you any Pulitzer Prizes, but it is going to save you massively on content expenditures.
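For reference, a draft like the one above is a single API call. Here's a minimal sketch against OpenAI's completions endpoint (the prompt wording and parameter values are my own illustrative choices, not OpenAI recommendations):

```python
# Sketch: one-click article generation with text-davinci-003 via the
# OpenAI completions REST endpoint. Prompt wording and parameter values
# are illustrative choices, not OpenAI recommendations.
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/completions"

def build_payload(topic: str, max_tokens: int = 600) -> dict:
    """Pure helper: the request body for a ~400-word article on `topic`."""
    return {
        "model": "text-davinci-003",
        "prompt": f"Write an engaging, well-structured article about {topic}.",
        "max_tokens": max_tokens,  # plenty of headroom in the 4,000-token window
        "temperature": 0.7,        # creative but coherent
    }

def generate_article(topic: str) -> str:
    """Send the request (requires OPENAI_API_KEY in the environment)."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(topic)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["text"].strip()
```

Swap in your own topic and key, and that's the whole "one click."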
How can we take advantage of this?
Odds are, if you're looking to take advantage of GPT-3.5, you've already been taking advantage of GPT-3. Not much will change on that front.
Of note, though: consistency is at least an order of magnitude better:
- Whereas GPT-3 quickly lost coherence at longer lengths, GPT-3.5 retains it. The context window is now a whopping 4,000 tokens!
- Many problems previously attributed to Byte Pair Encoding, such as trouble with rhyming, cadence, and poetry, appear to have been resolved (or at least minimized).
- You get much better zero-shot results with minimal prompting.
If I had carte blanche at a major content company, here's what I'd do to massively improve throughput:
- I'd use Google Sheets or Airtable to organize the company content calendar.
- I'd create a Custom Function that calls GPT-3.5 with a title, a brief, some keywords, and a prompt like "Write an engaging article with the following information".
- For every new row, I'd pre-generate 4-5 drafts using GPT-3.5 in the Sheet itself.
- My content team would now have ~70% of their work done. Then, they'd pick the best draft, mix/match paragraphs from the other ones to lengthen the piece, and (as a final step) add links/images/etc.
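The pre-generation step above can be sketched in a few lines of Python. (I'm using a CSV export rather than a live Sheet, and stubbing the model call behind a `generate` callable, since the exact Custom Function wiring depends on your stack; the prompt template mirrors the one described above.)

```python
# Sketch: for every row in the content calendar, pre-generate several
# candidate drafts. `generate` is whatever calls GPT-3.5 for you.
import csv
from typing import Callable, Iterable

PROMPT_TEMPLATE = (
    "Write an engaging article with the following information.\n"
    "Title: {title}\nBrief: {brief}\nKeywords: {keywords}"
)

def pregenerate_drafts(
    rows: Iterable[dict],
    generate: Callable[[str], str],
    n_drafts: int = 5,
) -> list[dict]:
    """Return each calendar row with 4-5 pre-generated drafts attached."""
    results = []
    for row in rows:
        prompt = PROMPT_TEMPLATE.format(
            title=row["title"], brief=row["brief"], keywords=row["keywords"]
        )
        results.append({**row, "drafts": [generate(prompt) for _ in range(n_drafts)]})
    return results

def load_calendar(path: str) -> list[dict]:
    """Read the content calendar from a CSV export of the Sheet."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))
```

Your content team then just picks the best of each row's drafts.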
At a bare minimum, the above would cut costs by 3-4x while increasing output by a similar factor.
Stable Diffusion API now available to developers
Stable Diffusion, the open-source diffusion-based image generator, recently released its API.
This means developers now have an easier way to generate images with Stable Diffusion's technology, and with it, a huge range of possibilities has emerged.
How can we take advantage of this?
Naturally, power users will have already downloaded, fine-tuned, and generated images on their local hard drive.
But with a standardized endpoint now available to everyone else, the playing field has leveled. You don't need to be a deployment/operations professional to get value out of Stable Diffusion anymore (nor do you need access to expensive GPUs).
Here are 4 off-the-top-of-my-head ways you might use Stable Diffusion's API in your career or company:
- Art commissions: use few-shot GPT-3.5 to translate incoming commissions into Stable Diffusion-ready prompts, and then route those to the Stable Diffusion API. "I'd like a gothic-style painting of a woman in a red dress with her dog" might become "Woman, red dress, holding dog, gothic-style, 1860, high-quality, artwork, Greg Rutkowski", for instance.
- Enhancing existing designs: use Stable Diffusion as an upscaler to enhance pre-existing images and artwork. Make logos crisper, old photos brighter, and otherwise improve existing designs.
- Generating textures: use Stable Diffusion to generate detailed, realistic textures for games or videos. Perfectly tileable textures can be generated with a single API call!
- Image search engine: I covered stock photography last week, but you can now easily create an image-based search engine (without the complicated backend). Hook up a search box to the Stable Diffusion API & you're good to go.
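The commissions idea, for instance, is just two calls chained together: a GPT-3.5 translation, then an image request. A rough sketch (the Stable Diffusion endpoint URL, JSON field names, and the few-shot example are all placeholders of my own; check your provider's API docs for the real schema):

```python
# Sketch: plain-English commission -> Stable Diffusion-ready prompt -> image.
# The endpoint URL and JSON field names below are illustrative placeholders.
import json
import urllib.request

# A few-shot prompt for GPT-3.5: one example translation, then the new request.
FEW_SHOT = """Translate art commissions into Stable Diffusion prompts.

Commission: I'd like a gothic-style painting of a woman in a red dress with her dog
Prompt: Woman, red dress, holding dog, gothic-style, 1860, high-quality, artwork

Commission: {commission}
Prompt:"""

def build_translation_prompt(commission: str) -> str:
    """Pure helper: the few-shot text you'd send to GPT-3.5."""
    return FEW_SHOT.format(commission=commission)

def request_image(sd_prompt: str, api_url: str, api_key: str) -> bytes:
    """POST the translated prompt to a Stable Diffusion API (placeholder schema)."""
    req = urllib.request.Request(
        api_url,
        data=json.dumps({"prompt": sd_prompt, "steps": 30}).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()  # raw response bytes; format depends on the provider
```

The same translate-then-generate shape works for the texture and search-engine ideas too; only the prompt template changes.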
Ultimately, image-generation APIs like Stable Diffusion will fundamentally change the way we create and consume media. In a few years, every image request on the Internet will probably include a parallel request to an AI image service, for customization, enhancement, or compression purposes.
I, for one, can't wait.
That's a wrap!
Enjoyed this? Consider sharing it with someone you know. And if someone sent it your way, you can get the next issue by signing up here.
See you next week.
– Nick