Generative AI Global Intelligence Briefing (September 2025)
Executive Summary: The State of Generative AI – September 2025
In September 2025, the generative AI sector is rapidly maturing beyond a phase of pure technological discovery into a more complex era defined by strategic commercial maneuvering, intensifying legal and regulatory conflict, and the establishment of geopolitical spheres of influence in AI. Our analysis of this month’s developments reveals five key themes that signal a structural shift in the industry.
First is “The End of Exclusivity.” The landmark restructuring of the Microsoft-OpenAI partnership symbolizes the market’s transition from an era of singular, monolithic alliances to a more competitive, multi-cloud, multi-model ecosystem. This heralds the “unbundling” of the AI value chain, where enterprises will increasingly adopt “best-of-breed” strategies, free from vendor lock-in.
Second is “The Shifting Battleground of Intellectual Property.” The lawsuit filed by Penske Media Corporation (PMC) against Google has redefined the debate over AI training data, moving it from a narrow copyright issue to a broader antitrust question of market dominance. This has profound implications for the business model of the open web, suggesting a new phase in the battle over content value versus platform power.
Third is “The Solidification of Regulatory Divergence.” The world is fracturing into three distinct AI governance regimes, anchored respectively by the European Union (EU), Japan, and the United States. This regulatory fragmentation will compel global companies to adopt modular compliance strategies tailored to each jurisdiction.
Fourth is “Japan’s National Ambition.” Through a combination of massive, state-coordinated investment in domestic infrastructure and a uniquely permissive legal framework, Japan is positioning itself as a powerful and attractive hub for global AI development. This reflects a clear industrial policy aimed at establishing AI sovereignty.
Finally, there is a clear trend “From Technology to Application.” The focus of innovation is shifting from the capabilities of foundational models themselves to their practical application in agentic workflows, multimodal content creation, and highly personalized enterprise solutions. This indicates AI’s evolution from a mere assistive tool to an active agent capable of automating entire business processes.
This report will delve deeply into these critical trends to illuminate the strategic challenges and opportunities facing Japanese companies and investors.
Chapter 1: The Great Unbundling: Strategic Realignments in the AI Power Structure
The early-stage strategic partnerships that once defined the AI industry are dissolving, giving way to a more fluid, competitive, and multipolar market. This shift signifies the “unbundling” of the AI value chain, heralding a new era where companies are no longer dependent on a single vendor but are free to combine the best models, infrastructure, and applications.
1.1 Microsoft and OpenAI’s Détente: A Redefined Partnership
The most iconic alliance in the AI industry has undergone a fundamental overhaul. After months of negotiations, Microsoft and OpenAI announced the signing of a non-binding memorandum of understanding (MOU) to restructure their partnership. This move, prompted by rising tensions over exclusivity and computational constraints, should be viewed not merely as a relationship adjustment but as a strategic “decoupling” that will catalyze a market-wide structural change.
The core of this MOU is the newfound freedom granted to OpenAI. The most significant change is that OpenAI is no longer exclusively tied to Microsoft Azure and can now collaborate with multiple cloud providers. This shift has already taken concrete form in OpenAI’s recent large-scale contract with Oracle. Furthermore, the agreement paves the way for OpenAI to transition from its complex capped-profit structure to a more conventional Public Benefit Corporation. This restructuring will see OpenAI’s non-profit parent organization manage a stake in the for-profit entity valued at over $100 billion.
Microsoft, for its part, has secured its position. It will maintain its massive investment of approximately $13 billion and retain “preferred access rights” to OpenAI’s technology. However, the specific details of this priority access are still being finalized. This restructuring holds strategic significance for both companies. For OpenAI, it provides access to a broader pool of capital and computational resources necessary to support its explosive growth. For Microsoft, it offers an opportunity to de-risk its AI strategy. Specifically, by integrating Anthropic’s Claude models into its products and accelerating its own in-house model development, Microsoft can reduce its dependency on a single partner. This is a consequence of the AI market’s maturation, where the overly tight partnership models of the early days have proven unsustainable for large-scale deployment.
1.2 Anthropic’s Enterprise Offensive: The Battle for Workflows
As the Microsoft-OpenAI relationship evolves, competitor Anthropic is intensifying its offensive to establish a foothold in the enterprise market. The company has announced a suite of powerful new features for its AI model, Claude, aimed directly at enhancing corporate productivity. These include a persistent “memory” function that retains context and the ability to directly create and edit files such as Excel, PowerPoint, PDF, and Word.
The “memory” feature, an optional function for Team and Enterprise plan users, allows Claude to remember context, user preferences, and project details across conversations. This eliminates the need for users to repeatedly provide background information, enabling them to handle complex tasks more efficiently. Furthermore, separate memories are maintained for each project, keeping distinct workstreams from bleeding into one another. An “incognito mode” has also been introduced for all users, allowing for context-free conversations that are not saved in history or memory.
Even more groundbreaking is the agentic file creation capability. Claude can now execute code in a sandboxed, secure environment to analyze data and generate ready-to-use documents. This transforms Claude from a mere conversational partner or advisor into a “collaborator” that actively performs tasks based on user instructions.
These enhancements clearly indicate that Anthropic is directly targeting the high-value enterprise productivity market, a traditional stronghold of Microsoft. By moving beyond simple chat functions to automate complex, multi-step, document-based workflows, Anthropic is positioning Claude as an indispensable tool for knowledge workers. This is a strategy to build a more defensible and “sticky” product, symbolizing the shift in AI’s value from mere information generation to concrete task execution.
1.3 Google’s Multimodal Strategy: Dominating the Creative Stack
Google is pursuing a different competitive strategy, avoiding a direct confrontation in LLM-based chat and instead leveraging its extensive ecosystem. At its core is the construction of an advanced multimodal content creation platform. The company has implemented a massive update across its entire generative AI suite, dramatically improving its video and image generation and editing capabilities.
In the video generation space, the latest model, “Veo 3,” has been announced. It is groundbreaking not only for its ability to generate high-definition 1080p quality videos but also for its capacity to add AI-native, synchronized audio (including dialogue, sound effects, and ambient sounds). This enables the production of more polished content without the need for traditional post-processing. Furthermore, it supports the vertical format popular on social media and has significantly reduced API pricing to encourage developer adoption.
In the image generation and editing space, “Gemini 2.5 Flash Image” (nicknamed “Nano Banana”) is garnering significant attention. This model excels at maintaining character consistency across multiple edits, allows for iterative editing through natural language conversation, and features a style transfer function. In particular, its ability to generate highly realistic 3D figurines from photos has gone viral on social media.
The brilliance of Google’s strategy lies in the deep integration of these advanced tools with its powerful platforms. By incorporating Veo 3 into the YouTube Shorts creation tool and making Nano Banana available for free experimentation through AI Studio, Google aims to control the entire creative workflow, from idea generation to content distribution. This is a powerful strategy that maximizes the company’s strengths in consumer products and large-scale data processing, shifting the competitive focus in the AI market from pure model performance to an integrated platform experience.
Chapter 2: The Content Wars: The Future of Generative AI and Intellectual Property
The legal battles over the use of training data by AI are reaching a critical turning point. The focus of the debate is shifting from the complexities of copyright theory to the tangible economic damages suffered by the content industry. This issue is expanding beyond the confines of intellectual property law into the realm of antitrust, shaking the very foundations of the digital economy.
2.1 The Landmark Lawsuit: Penske Media Corp. v. Google
Penske Media Corporation (PMC), the publisher of prominent media outlets such as ‘Rolling Stone’ and ‘Variety,’ has filed a major lawsuit against Google. The suit alleges that Google’s search feature, “AI Overviews,” illegally uses publishers’ content and diverts traffic from their websites. This is the first time a major U.S. publisher has sued Google over its AI search function, and the entire industry is closely watching the outcome.
The core of PMC’s argument extends beyond simple copyright infringement. They frame the issue as an antitrust violation, specifically “reciprocal dealing.” According to the complaint, Google is abusing its overwhelming monopoly power in the general search services market, with a share of approximately 90%, to force publishers into a “Hobson’s choice” (a situation with no real alternative). Specifically, if publishers do not allow their content to be used in AI Overviews, they risk being penalized with lower search rankings and losing traffic. This is alleged to amount to illegal reciprocal dealing, in which dominance in one market (search) is used to extract an unfair advantage (free content) from suppliers in an adjacent market (AI-powered answer generation).
The economic damages resulting from this practice are also detailed. The increase in “zero-click searches,” where answers are provided directly on the search results page, has led to a significant decrease in referral traffic to PMC’s sites, severely impacting advertising and affiliate revenue. In fact, affiliate revenue has reportedly dropped by more than a third from its peak. In addition to the antitrust claims, the lawsuit also includes allegations of unjust enrichment (Google unfairly profiting from the substantial investments made by publishers) and that the scraping of content for training purposes constitutes “systematic copyright infringement”.
In response, Google has countered that AI Overviews provides new opportunities for users to discover a wider range of sites, ultimately driving traffic to them, and has stated its intention to fight the “baseless claims” in court.
2.2 The Shifting Legal Landscape: From Fair Use to Market Dominance
The PMC lawsuit signifies an evolution in the legal tactics of content creators, moving from a “defensive” posture of protecting copyright purity to an “offensive” one attacking the market power of AI platform operators. Previously, the debate over training AI models on copyrighted works primarily revolved around the U.S. legal doctrine of “fair use”. AI developers argued that use for training purposes is “transformative,” creating new value without substituting the market for the original work. Creators, on the other hand, contended that AI-generated content harms the potential market for their original works. The U.S. Copyright Office has stated that AI-generated outputs are only eligible for copyright protection if there is sufficient human creative control, placing purely AI-generated content in the public domain.
However, the fair use debate is legally complex and outcomes are difficult to predict. PMC’s decision to anchor its lawsuit in antitrust law is a strategic shift to overcome this uncertainty. While the interpretation of fair use is ambiguous, Google’s monopoly position in the search market is a legally established fact from the Department of Justice’s prior antitrust lawsuit. By building on this established fact, PMC can challenge Google’s business practices on a more solid legal footing.
This lawsuit is not just about PMC; it represents the collective frustration of the industry. Other publishers have filed similar lawsuits against AI company Cohere for scraping content behind paywalls to provide “substitutional summaries”. Furthermore, the advertising and publishing industries were disappointed that the remedies in the Google search antitrust trial did not sufficiently curb Google’s market power, and they now see PMC’s direct lawsuit as a new and potentially more effective countermeasure.
The outcome of this legal battle will set a crucial precedent for all Japanese digital businesses that rely on web traffic, including media, e-commerce, and digital marketing. If PMC wins, global platforms like Google may be forced to redesign their AI search functions to be more publisher-friendly, for example, by mandating clear links to sources or introducing revenue-sharing models. Conversely, if PMC loses, the decline in referral traffic will likely accelerate, forcing companies to undertake a fundamental and painful transition from traditional business models dependent on advertising and affiliate revenue to direct monetization models like subscriptions and data product sales. Japanese media companies must prepare for both scenarios and adjust their business strategies accordingly.
Chapter 3: A Fractured World: Navigating the Divergence of Global AI Regulation
Global AI governance is not converging towards a unified standard but is clearly fracturing into three major regulatory regimes, each with a different philosophical underpinning. The European Union (EU) has adopted a “fortress” model prioritizing rights protection, Japan has chosen an “open” model to foster innovation, and the United States is pursuing a “market-driven” approach shaped by geopolitical competition. This regulatory divergence makes it difficult for global companies to cover the world market with a single product or service, necessitating a deep understanding of each jurisdiction’s characteristics and the construction of modular compliance strategies.
3.1 The European Union’s “Fortress Europe”: Regulation as a Product Standard
The EU’s AI regulation is embodied in the comprehensive and legally binding “EU AI Act.” The foundation of this law is a risk-based approach that classifies AI systems and imposes different obligations based on their risk level. Its most significant feature is the principle of “extraterritorial application,” meaning that even companies without a physical presence in the EU are subject to the law if their AI systems are placed on the EU market or if their output is used within the EU.
As of September 2025, the AI Act is being implemented in stages. The use of AI posing an “unacceptable risk” to public safety and individual rights, such as social scoring or subliminal behavioral manipulation, is already prohibited. Since August 2025, obligations for providers of General-Purpose AI (GPAI) models have been in effect. These include transparency requirements, such as publishing a summary of copyrighted works used for training, supported by a new Code of Practice and official guidelines. The “AI Office,” responsible for enforcement, and the “AI Board,” which coordinates among member states, have also officially begun their activities.
For businesses, the most critical aspect is the strict obligations for AI systems classified as “high-risk.” This category includes AI used in areas like employment, critical infrastructure management, and law enforcement. Providers must meet a wide range of requirements, including establishing risk management systems, using high-quality training data, ensuring human oversight, and maintaining detailed technical documentation. Failure to comply can result in substantial fines, making compliance a critical management issue for companies.
3.2 Japan’s “Innovation-First” Strategy: Regulation as a Competitive Advantage
In stark contrast to the EU’s stringent approach, Japan has chosen a completely different path for AI regulation. The “Act on the Promotion of Research, Development, and Utilization of AI-Related Technologies” (commonly known as the AI Promotion Act), which came into full effect in September 2025, clearly articulates this philosophy.
Underpinning this law is the government’s strong ambition to become the “world’s most AI-friendly country”. Instead of the EU’s risk-based, pre-emptive regulation, Japan has adopted a “light-touch” and “innovation-first” approach, deliberately avoiding strict rules and penalties that could stifle innovation. The AI Promotion Act is positioned as “soft law,” encouraging voluntary cooperation from businesses, rather than “hard law” that imposes direct obligations and penalties.
Its implementation is guided by the “AI Basic Plan,” formulated by the “AI Strategy Headquarters,” chaired by the Prime Minister. This plan is expected to include government measures such as promoting research and development, fostering human resources, and contributing to the formation of international norms. The law itself contains no penalties, but illegal acts using AI will be punished under existing laws. The government also has the authority to investigate cases where the rights and interests of citizens are infringed and has indicated the possibility of a “name-and-shame” approach, publicizing the names of non-compliant companies to indirectly encourage compliance through reputational risk. This approach strongly reflects Japan’s industrial policy thinking, which views regulation not as a hindrance to innovation but as a strategic tool for winning international competition.
3.3 The U.S. “Market-Driven” Patchwork: Regulation as a Geopolitical Tool
The United States has no comprehensive federal AI law comparable to the EU AI Act or Japan’s AI Promotion Act. Its approach is a complex “patchwork” of non-binding presidential executive orders, guidelines issued by individual agencies, and a growing number of state-level laws.
The fundamental stance at the federal level is to maximize innovation and maintain global leadership, especially in competition with China. The “AI Action Plan” announced by the Trump administration and the “SANDBOX Act” proposed in Congress are symbolic of this ideology. These policies aim to create “sandbox” systems that temporarily exempt developers from existing regulations, allowing them to test new AI technologies without being hindered by regulatory barriers. Meanwhile, major tech companies, wary of a proliferation of differing state regulations, are actively lobbying for the enactment of a more lenient and unified federal law.
In the absence of comprehensive federal legislation, states like Colorado, Illinois, California, and New York are independently advancing their own AI regulations. These state laws often focus on specific applications, such as ensuring transparency in the use of AI for hiring, protecting personal data, and mandating the reporting of AI-driven layoffs. This creates a complex compliance environment for companies operating nationwide, as they must adhere to different regulations in each state. This decentralized, market-driven approach is a driving force for innovation in the U.S., but it also entails challenges of legal uncertainty and increased compliance costs.
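The "modular compliance" posture that this three-regime split forces on global companies can be made concrete with a small sketch. The following is purely illustrative Python, assuming a hypothetical jurisdiction-to-obligations mapping distilled from this chapter; the jurisdiction codes, obligation strings, and function names are invented for illustration and are not legal advice or any real compliance framework.

```python
# Illustrative sketch of a modular, per-jurisdiction compliance checklist.
# All jurisdiction codes and obligation strings are hypothetical
# simplifications of the three regimes described in this chapter.

OBLIGATIONS = {
    "EU": [
        "risk-classify the AI system (prohibited / high-risk / GPAI / minimal)",
        "if high-risk: risk management, quality training data, human oversight, technical documentation",
        "if GPAI provider: publish training-data summary, follow the Code of Practice",
    ],
    "JP": [
        "no binding AI-specific obligations; cooperate voluntarily with the AI Basic Plan",
        "comply with existing sector laws; note 'name-and-shame' reputational risk",
    ],
    "US": [
        "no single federal AI law; check applicable state rules (e.g., CO, IL, CA, NY)",
        "application-specific duties: hiring transparency, data protection, AI-driven layoff reporting",
    ],
}

def compliance_checklist(jurisdictions):
    """Return the combined obligations for every market a product ships to."""
    checklist = []
    for j in jurisdictions:
        items = OBLIGATIONS.get(j, ["unknown jurisdiction: seek local counsel"])
        checklist.extend(f"[{j}] {item}" for item in items)
    return checklist

if __name__ == "__main__":
    for line in compliance_checklist(["EU", "JP", "US"]):
        print(line)
```

The point of the sketch is structural: a single product does not get a single rulebook, so compliance must be assembled per target market rather than designed once globally.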
Table 1: Comparative Analysis of AI Regulatory Frameworks (September 2025)

| Dimension | European Union | Japan | United States |
|---|---|---|---|
| Core instrument | EU AI Act (comprehensive, legally binding) | AI Promotion Act ("soft law") | Patchwork of executive orders, agency guidelines, and state laws |
| Philosophy | Rights protection via risk-based rules ("Fortress") | Innovation-first, "light-touch" promotion | Market-driven, shaped by geopolitical competition |
| Reach | Extraterritorial application | Voluntary cooperation by businesses | Varies by state (e.g., CO, IL, CA, NY) |
| Enforcement | AI Office and AI Board; substantial fines | No penalties; investigations and "name-and-shame" | Agency and state enforcement; legal uncertainty |