LTX 2.3 Studio
Welcome to LTX 2.3 Studio: an all-in-one AI video generation SaaS built on LTX 2.3. Turn a single sentence, image, or audio clip into deliverable video content in minutes. LTX 2.3 improves detail sharpness via an updated VAE and latent space, and enhances prompt adherence and motion consistency — making outputs feel truly ready to use. Start creating short videos, ad storyboards, product demos, and prototype animations with LTX 2.3 today.


Get Started
LTX 2.3 Workflows
Type 1: Text-to-Video (T2V)
Enter a prompt and duration — LTX 2.3 generates the video directly, with the option to simultaneously produce native audio (ambient sound, dialogue, or effects). The new text connector makes multi-subject scenes, spatial relationships, and style instructions easier to execute correctly.
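To make the inputs concrete, here is a minimal sketch of how a text-to-video request could be assembled and validated before submission. The field names, the 24/48 FPS options, and the ~20-second cap are assumptions drawn from this page, not a documented LTX 2.3 Studio API — check the actual service reference before relying on them.

```python
# Illustrative sketch of a text-to-video (T2V) request payload.
# Field names, limits, and defaults are assumptions for demonstration;
# consult the real LTX 2.3 Studio API documentation for the actual schema.

VALID_FPS = {24, 48}      # frame-rate options mentioned for hosted endpoints
MAX_DURATION_S = 20       # approximate single-run cap on some services

def build_t2v_request(prompt: str, duration_s: float,
                      fps: int = 24, with_audio: bool = True) -> dict:
    """Validate inputs and assemble a JSON-serializable T2V request."""
    if not prompt.strip():
        raise ValueError("prompt must be non-empty")
    if fps not in VALID_FPS:
        raise ValueError(f"fps must be one of {sorted(VALID_FPS)}")
    if not 0 < duration_s <= MAX_DURATION_S:
        raise ValueError(f"duration must be in (0, {MAX_DURATION_S}] seconds")
    return {
        "mode": "text-to-video",
        "prompt": prompt,
        "duration_s": duration_s,
        "fps": fps,
        "generate_audio": with_audio,  # native ambient sound, dialogue, or FX
    }
```

Validating duration and frame rate client-side like this avoids a round trip for requests the service would reject anyway.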


Type 2: Image-to-Video (I2V)
Upload a keyframe and LTX 2.3 brings it to life — with fewer common issues like frozen frames or slow-push (Ken Burns) artifacts, and improved real motion and consistency. Ideal for e-commerce hero animations, animated posters, and character shot tests.


Type 3: Extend & Retake
Use LTX 2.3 to extend an existing clip or re-generate a specific segment, reducing rework costs while maintaining shot continuity. For longer narratives, generate short segments first and extend them step by step.
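Since single runs top out around 20 seconds on some hosted services, a longer narrative is built by chaining one base generation with repeated extend steps. A minimal planner for that chaining might look like the sketch below; the 20-second cap and the 10-second extend step are illustrative assumptions, not documented limits.

```python
# Sketch of planning a long clip as one base generation plus extend steps.
# MAX_SINGLE_RUN_S and EXTEND_STEP_S are illustrative assumptions.

MAX_SINGLE_RUN_S = 20   # approximate per-generation limit on some services
EXTEND_STEP_S = 10      # hypothetical seconds added per extend call

def plan_segments(total_s: float) -> list[float]:
    """Return segment durations: one base generation, then extend steps."""
    if total_s <= 0:
        raise ValueError("total duration must be positive")
    segments = [min(total_s, MAX_SINGLE_RUN_S)]
    remaining = total_s - segments[0]
    while remaining > 0:
        step = min(remaining, EXTEND_STEP_S)
        segments.append(step)
        remaining -= step
    return segments
```

For example, a 45-second clip would be planned as a 20-second base generation followed by extend steps of 10, 10, and 5 seconds, with each extension starting from the previous segment to preserve shot continuity.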


User Stories
What Our Users Are Creating
Real stories from our community
Authentic experiences and creative works shared by users worldwide
Join thousands of creators using our platform
Want to see your work featured here?
Share Your Story
Core Capabilities
Why Choose LTX 2.3?
LTX 2.3 is optimized for production-ready output: sharper detail, stronger prompt adherence, more consistent motion, cleaner native audio, and native vertical video. Think of it as the foundation engine for your video content pipeline.
- Sharper Detail & Texture
- LTX 2.3 uses an updated VAE and latent space to improve fidelity in textures, hair strands, and edges — reducing reliance on post-production sharpening.
- Stronger Prompt Understanding
- LTX 2.3 upgrades its text connector for more stable handling of complex prompts — multiple subjects, spatial relationships, and style constraints — with less drift or off-topic output.
- Cleaner Native Audio
- LTX 2.3 produces fewer audio artifacts and dropouts thanks to cleaner training data and a new vocoder. It supports synchronized audio generation and audio-driven video workflows.
- Vertical & Multi-Modal in One
- LTX 2.3 supports native vertical output up to 1080×1920 for short-video platforms, and covers text, image, and audio-to-video workflows — as well as extend and retake.
Frequently Asked Questions
- What is LTX 2.3?
LTX 2.3 is one of Lightricks' open-source video generation model releases. It supports text-to-video, image-to-video, audio-to-video, and more — with native audio and vertical video capabilities.
- What improvements does LTX 2.3 bring over previous versions?
LTX 2.3 focuses on better detail (new VAE/latent space), improved prompt understanding (larger text connector), more consistent motion in image-to-video, cleaner audio, and native vertical video output.
- How long can LTX 2.3 generate in a single run?
On some hosted inference services, LTX 2.3 can generate up to approximately 20 seconds in a single generation, and you can continue extending clips using the extend workflow.
- What frame rates and resolutions does LTX 2.3 support?
Frame-rate options such as 24 and 48 FPS are available through supported APIs and hosted endpoints, with higher-resolution outputs possible. Vertical video can reach 1080×1920 — natively trained, not cropped.
- What are the recommendations for commercial use and deployment?
LTX 2.3 is listed as Apache 2.0 licensed on some distribution platforms, which permits use in commercial projects. If you use a specific desktop product or distribution, be sure to also review its own commercial license terms.