The Cost of Creation

April 29, 2025
6 min read

Introduction: The Generative AI Boom

Generative AI has ushered in a new era of creativity, making it possible to produce hyper-realistic images, videos and text with unprecedented ease. Tools like Framepack, which can generate near-cinematic video on consumer-grade hardware, exemplify this technological leap. The accessibility of these tools has democratised innovation, but it also raises a critical question: where do we draw the ethical line so that innovation remains safe? I've seen firsthand how these technologies can transform industries, but also how they can be misused if left unchecked.

Commercialised Generative Models: A Double-Edged Sword

The rise of prebuilt models and adapters, such as open-source checkpoints and LoRAs for Stable Diffusion, has lowered the barrier to entry for developers. This democratisation has fuelled rapid prototyping in gaming, media and education - enabling small teams to create content that rivals big studios. However, the same accessibility allows bad actors to bypass safeguards, creating NSFW content or other malicious outputs. For instance, open-source models can be fine-tuned to generate harmful material, highlighting the need for ethical oversight in model distribution.

Positive Impacts

  • Gaming: Developers use AI to generate textures and animations, speeding up production.
  • Media: AI-driven video editing tools streamline content creation for filmmakers.
  • Education: Personalised learning materials are created using AI, enhancing student engagement.

Negative Implications

  • NSFW Content: Uncensored models can produce explicit material, often without consent.
  • Malicious Use: AI-generated misinformation or propaganda can spread rapidly, as seen in cases of deepfake political videos.

Framepack: Revolutionising Video Generation

Framepack, developed by Stanford researchers, is a neural network architecture that enables efficient video generation with just 6GB of VRAM. It compresses previously generated frames according to their importance to the next prediction, so the context stays a fixed size and quality holds up over long videos - a game-changer for content creators. Its applications are vast, from marketing campaigns to animated shorts, but its potential for harm is equally significant.
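
Framepack's real mechanism is more sophisticated, but a minimal Python sketch can convey the core idea as I understand it: each step back in time gets a smaller token budget, so the total context stays bounded however long the video runs. Every name and number below is hypothetical, not Framepack's actual API.

```python
import numpy as np

def compress_context(frames, base_tokens=1536):
    """Toy importance-based compression: the most recent frame keeps
    the most tokens; each older frame gets half the budget of the
    one after it, so total context length stays bounded."""
    compressed = []
    for age, frame in enumerate(reversed(frames)):   # age 0 = newest frame
        budget = max(base_tokens // (2 ** age), 1)   # halve the budget per step back
        stride = int(np.ceil(frame.size / budget))   # coarser sampling = fewer tokens
        compressed.append(frame.reshape(-1)[::stride])
    return np.concatenate(compressed)

# Ten 64x64 "frames": the compressed context is capped by a geometric
# series, no matter how many frames we feed in.
frames = [np.random.rand(64, 64) for _ in range(10)]
print(compress_context(frames).shape)
```

Because the per-frame budgets form a geometric series, the compressed context converges to a roughly constant size - which is why the memory footprint can stay flat even for long videos.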

Beneficial Uses

  • Marketing: Businesses create engaging video ads quickly and cost-effectively.
  • Animation: Independent animators produce high-quality content without expensive hardware.

Harmful Uses

  • Deepfakes: Framepack's realism could be exploited to create convincing fake videos - the same class of deception behind the Arup case in Hong Kong, discussed below.
  • Revenge Porn: Non-consensual synthetic content can harm individuals' reputations and mental health.

The Arup Deepfake Scam

In 2024, Arup, a multinational engineering firm, lost $25.6 million due to a deepfake scam in Hong Kong. Scammers used AI to simulate a video conference with the CFO and other employees, convincing a finance worker to transfer funds to fraudulent accounts.

The scam began with a phishing message purportedly from Arup's UK-based CFO, instructing the employee to execute a secret transaction. Despite initial doubts, the employee was convinced by a group video call in which the CFO and other colleagues appeared and sounded realistic - all of them deepfake recreations. The employee then made multiple transfers, totalling $25.6 million, to fraudulent accounts in Hong Kong. The incident was only uncovered a week later, highlighting how difficult such scams are to detect in real time.

The Pornification Problem & CSAM Crisis

The rise of AI-generated pornography - including synthetic nudity, celebrity deepfakes and the proliferation of AI-generated child sexual abuse material (CSAM) - is a growing concern. The Internet Watch Foundation reported more than 20,000 AI-generated images posted to a single dark web CSAM forum in one month. The National Center for Missing & Exploited Children received 4,700 reports of AI-generated CSAM in 2023.

Policy Responses

UK: New laws will criminalise the possession and creation of AI tools designed to generate CSAM, with penalties of up to five years in prison (BBC News).

US: California's bill expands the definition of obscene matter to include AI-generated CSAM (Inside Global Tech).

Global Efforts: The EU's AI Act, whose obligations phase in from 2025, classifies AI systems by risk, including those capable of generating harmful content (EU AI Act).

Challenges

Enforcement is complicated by the global nature of AI tools and the anonymity of dark web platforms. Experts like Prof Clare McGlynn argue that gaps remain, such as the need to ban "nudify" apps (BBC News).

Privacy Concerns in Training Data

Generative AI models often rely on internet-scraped data, raising significant privacy concerns. Personal images and data are used without explicit consent, violating individual rights (OAIC). For example, publicly available photos can be incorporated into training datasets, potentially leading to unauthorised use of one's likeness.

Key Issues

  • Lack of Transparency: Companies rarely disclose data sources, making it hard to verify consent.
  • Legal Frameworks: Regulations like GDPR and CCPA require explicit consent for personal data use, but compliance is inconsistent (Terms.law).
  • Ethical Concerns: Using personal data without permission undermines trust in AI technologies.

Proposed Solutions

  • Synthetic Data: Generating artificial data to reduce reliance on real personal information.
  • Differential Privacy: Techniques that add calibrated noise so individual records cannot be inferred, while preserving model utility - see the sketch after this list.
  • Clear Consent Mechanisms: Requiring explicit user permission for data use in AI training.
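
To make the differential privacy point concrete, here is a minimal sketch of the Laplace mechanism, the textbook building block behind many private data releases. Real training pipelines use heavier machinery (e.g. DP-SGD); the dataset and numbers below are invented for illustration.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release a statistic with epsilon-differential privacy.
    `sensitivity` bounds how much one person's data can change it."""
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: publish how many users appear in a dataset. Adding or
# removing one person changes a count by at most 1, so sensitivity = 1.
private_count = laplace_mechanism(12_345, sensitivity=1, epsilon=0.5)
print(round(private_count))  # near 12,345, yet deniable for any individual
```

A smaller epsilon means more noise and stronger privacy; managing that trade-off against model utility is exactly what makes these techniques attractive for training on sensitive data.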

The Need for Guardrails: Regulation, Transparency and Access Control

To address these challenges, experts propose a range of guardrails to ensure responsible AI use. These include stricter model release protocols, licensing for high-risk tools, watermarking AI-generated content and non-removable content filters (ResearchGate). Australia's AI Ethics Principles provide a voluntary framework emphasising fairness, privacy and accountability (Australian Government).

| Solution | Description | Example |
| --- | --- | --- |
| Stricter Model Release | Limiting access to high-risk models to verified developers. | Licensing for AI tools capable of generating CSAM. |
| Watermarking | Embedding identifiers in AI-generated content for traceability. | Google's SynthID for image watermarking. |
| Content Filters | Non-removable filters to block harmful outputs. | OpenAI's moderation tools for ChatGPT. |
| Transparency Requirements | Mandating disclosure of training data sources and model capabilities. | EU AI Act's transparency obligations (EU AI Act). |
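
To illustrate what watermarking means in practice, here is a deliberately naive least-significant-bit scheme in Python. Production systems such as SynthID use learned, robust watermarks that survive cropping and re-encoding; this toy version only shows the embed/extract contract, and every value in it is made up.

```python
import numpy as np

def embed_watermark(image, payload_bits):
    """Toy watermark: hide payload bits in the least significant bit
    of the first pixels. Illustration only - trivially removable."""
    marked = image.copy().reshape(-1)
    for i, bit in enumerate(payload_bits):
        marked[i] = (marked[i] & 0xFE) | bit  # clear the LSB, then set it to the bit
    return marked.reshape(image.shape)

def extract_watermark(image, n_bits):
    return [int(p) & 1 for p in image.reshape(-1)[:n_bits]]

# Tag an image with an 8-bit "AI-generated" identifier and read it back.
img = np.random.randint(0, 256, size=(32, 32), dtype=np.uint8)
payload = [1, 0, 1, 1, 0, 0, 1, 0]
assert extract_watermark(embed_watermark(img, payload), 8) == payload
```

The hard part, which this sketch ignores entirely, is making the identifier survive compression, resizing and deliberate removal attempts - the problem SynthID and similar systems are built to solve.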

Regulatory Landscape

Australia: Voluntary AI Ethics Principles guide ethical AI development, with proposals for mandatory guardrails in high-risk settings (DLA Piper).

EU: The AI Act classifies AI systems by risk, with strict rules for high-risk applications phasing in from 2025.

US: State-level bills target AI-generated CSAM and algorithmic discrimination - though federal legislation remains uncertain (Inside Global Tech).

Tension with Open-Source Culture

The open-source community values unrestricted access to AI models, but this can conflict with ethical responsibilities. Balancing openness with safety is a key challenge, as seen in debates over Stable Diffusion's unrestricted releases.

Conclusion: A Call for Ethical Engineering

Generative AI's promise is undeniable, from revolutionising education to enhancing creative industries. However, its risks - privacy breaches, AI-generated CSAM and deepfakes - demand urgent action. I believe thoughtful regulation and ethical engineering are overdue. We must advocate for responsible AI use, support robust regulations and engage in open discussions to ensure AI serves humanity without causing harm. The future of AI depends on our ability to navigate these complexities with empathy and foresight.
