The creation of hyper-realistic fake videos — popularly known as deepfakes — has become one of the most striking advances in modern artificial intelligence. But what is the technical name for this technology when it is applied to pre-existing videos? And how do these systems actually work?

In this article, we explore the correct terminology, the internal mechanics, and the main methods used to generate high-precision deepfakes.
The Technical Name: Face-Swapping with GANs
The most widely accepted technical term for AI that generates deepfakes from existing videos is:
Face-swapping using Generative Adversarial Networks (GANs)
or simply
Deepfake generation using GAN-based models
These expressions appear frequently in academic papers and technical discussions of AI-driven video manipulation.
Related Technical Terms
When we analyze the technological ecosystem behind deepfakes, several complementary expressions are used to categorize specific techniques:
- Face Reenactment
Transferring one person’s facial expressions and head movements onto another person’s face, so that the target appears to deliver the source’s performance.
- Neural Rendering
Neural-network-based rendering techniques applied to altered faces, movements, or visual elements in video.
- Video Synthesis
Creation or modification of video using visual data processed by AI models.
- Identity Swapping
Full or partial replacement of the visual identity in a video, including face structure, traits, and lighting. A frame-by-frame pipeline sketch follows this list.
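To make the identity-swapping idea concrete, here is a minimal frame-by-frame sketch of how a pre-existing video is typically processed: read each frame, locate the face, hand the cropped region to a swap model, and write the result back. It assumes OpenCV for video I/O and face detection; the file names and the `swap_face` function are hypothetical placeholders standing in for a trained model.

```python
# Frame-by-frame video pipeline sketch (OpenCV). Illustrative only.
import cv2

def swap_face(face_bgr):
    # Placeholder for a trained identity-swap model; here it just blurs the
    # region so the pipeline runs end to end without any model weights.
    return cv2.GaussianBlur(face_bgr, (31, 31), 0)

# Haar cascade face detector bundled with opencv-python.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

capture = cv2.VideoCapture("input.mp4")   # assumed input path
fps = capture.get(cv2.CAP_PROP_FPS) or 30.0
width = int(capture.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(capture.get(cv2.CAP_PROP_FRAME_HEIGHT))
writer = cv2.VideoWriter(
    "output.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height)
)

while True:
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        # Replace each detected face region with the model's output.
        frame[y:y + h, x:x + w] = swap_face(frame[y:y + h, x:x + w])
    writer.write(frame)

capture.release()
writer.release()
```

Real tools add face alignment, color matching, and mask blending at the paste-back step, but the overall loop follows this shape.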
How Does AI Create Deepfakes?
Deepfake technology is fundamentally powered by GANs (Generative Adversarial Networks) — AI models composed of two neural networks that compete against each other:
1. Generator
Creates synthetic images of the target face.
2. Discriminator
Attempts to detect what is real and what is artificially generated.
The continuous adversarial training between these two networks forces the model to produce increasingly realistic results, until the manipulated video becomes nearly indistinguishable from the original.
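As an illustration of this adversarial loop, the sketch below trains a toy generator and discriminator in PyTorch. It is a minimal, hypothetical example rather than the architecture of any real deepfake tool: production systems condition the generator on source video frames and add identity and perceptual losses.

```python
# Minimal GAN training loop sketch (PyTorch). Illustrative assumptions only.
import torch
import torch.nn as nn

LATENT_DIM = 100
IMG_PIXELS = 64 * 64 * 3  # flattened 64x64 RGB face crops

# Generator: maps random noise to a synthetic face image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 512), nn.ReLU(),
    nn.Linear(512, IMG_PIXELS), nn.Tanh(),
)

# Discriminator: scores how "real" an image looks (raw logit).
discriminator = nn.Sequential(
    nn.Linear(IMG_PIXELS, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    noise = torch.randn(batch, LATENT_DIM)
    fake_images = generator(noise)

    # 1) Discriminator: push real scores toward 1, fake scores toward 0.
    d_loss = bce(discriminator(real_images), torch.ones(batch, 1)) + \
             bce(discriminator(fake_images.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Generator: try to fool the discriminator into scoring fakes as real.
    g_loss = bce(discriminator(fake_images), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Example: one step on a random "real" batch standing in for face crops.
train_step(torch.rand(8, IMG_PIXELS) * 2 - 1)
```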
In addition to GANs, some modern systems also use:
- Autoencoders (the classic shared-encoder approach, sketched after this list)
- Convolutional Neural Networks (CNNs)
- Transformer-based video architectures
- Motion transfer models
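The autoencoder route deserves a closer look, since the earliest face-swap tools were built on it. The sketch below shows the shared-encoder, two-decoder idea in PyTorch: one encoder learns pose and expression features common to both identities, each identity gets its own decoder, and swapping simply means decoding person A's features with person B's decoder. Layer sizes and names here are illustrative assumptions, not any specific tool's architecture.

```python
# Shared-encoder / two-decoder autoencoder sketch (PyTorch). Illustrative only.
import torch
import torch.nn as nn

IMG = 64 * 64 * 3  # flattened, aligned face crop

shared_encoder = nn.Sequential(nn.Linear(IMG, 256), nn.ReLU(), nn.Linear(256, 128))
decoder_a = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, IMG), nn.Sigmoid())
decoder_b = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, IMG), nn.Sigmoid())

loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(
    list(shared_encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-4,
)

def train_step(faces_a: torch.Tensor, faces_b: torch.Tensor) -> None:
    # Each decoder learns to reconstruct its own identity from the shared code.
    recon_a = decoder_a(shared_encoder(faces_a))
    recon_b = decoder_b(shared_encoder(faces_b))
    loss = loss_fn(recon_a, faces_a) + loss_fn(recon_b, faces_b)
    optimizer.zero_grad(); loss.backward(); optimizer.step()

def swap_a_to_b(faces_a: torch.Tensor) -> torch.Tensor:
    # Inference-time swap: person A's expression and pose, rendered with B's identity.
    return decoder_b(shared_encoder(faces_a))

# Example with random stand-in data in place of real aligned face crops.
train_step(torch.rand(8, IMG), torch.rand(8, IMG))
swapped = swap_a_to_b(torch.rand(8, IMG))
```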
Key Frameworks and Technologies Behind Deepfakes
Several widely used tools and frameworks power both research and practical applications:
• DeepFaceLab
One of the most widely used open-source tools for advanced deepfake creation.
• FaceSwap
An open-source tool for face replacement.
• First Order Motion Model (FOMM)
Enables animating static images using reference video motion.
• StyleGAN
NVIDIA’s groundbreaking generative model for highly realistic imagery.
• Avatarify
A real-time app that maps facial movements to another face.
• Face2Face
One of the earliest systems for interactive facial manipulation in video.
Legitimate Applications of the Technology
Although deepfakes are often associated with risk and misuse, the underlying technology also powers several beneficial applications:
- Cinema and visual effects
- Facial dubbing and lip-syncing across languages
- Preservation of historical performances
- Synthetic data generation for AI training
- Hyper-realistic avatars for VR and metaverse environments
Ethical and Security Challenges
The rapid evolution of this technology raises important ethical and security concerns:
- Difficulty in verifying the authenticity of video footage
- Misuse for defamation or fraud
- Creation of non-consensual content
- Political or social manipulation
For this reason, many companies are developing deepfake-detection systems, using AI techniques designed to do the opposite: identify synthetic video artifacts.
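At their simplest, such detectors are binary classifiers trained on real and manipulated frames. The sketch below shows a minimal frame-level detector in PyTorch; it is a hypothetical illustration, since production systems also exploit temporal inconsistencies, frequency-domain artifacts, and provenance signals.

```python
# Minimal frame-level deepfake detector sketch (PyTorch). Illustrative only.
import torch
import torch.nn as nn

# Small CNN that classifies 64x64 RGB face crops as real vs. synthetic.
detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1),  # single logit; assumes 64x64 inputs
)

loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(detector.parameters(), lr=1e-4)

def train_step(frames: torch.Tensor, labels: torch.Tensor) -> float:
    # labels: 1.0 for real frames, 0.0 for manipulated frames.
    logits = detector(frames)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

# Example with random stand-in data in place of labeled face crops.
frames = torch.rand(8, 3, 64, 64)
labels = torch.randint(0, 2, (8, 1)).float()
print(train_step(frames, labels))
```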
Conclusion
When discussing AI systems that generate deepfakes from pre-existing videos, the most accurate technical name is face-swapping using GANs — a combination of deep learning, adversarial modeling, and advanced visual manipulation.
This technology continues to evolve at an impressive pace, bringing both creative innovations to the audiovisual industry and complex ethical challenges.
Understanding how it works is the first step toward using it responsibly.
