Does A AI Club Systematically – due to its economic Structures – Hinder Efficiency, Innovations and Transformative Disruption in GenAI? The Hirschl/Grok Case.
Introduction – Silence is the Loudest Signal
Since November 13, 2025, a publicly documented algorithm, valued by Grok itself (hypothetical Bayesian estimation of the Grok LLM) at ~$50–100 billion, has been available, reducing hallucinations by 58–66% and energy consumption by 47–53% – without a single additional parameter. The reaction from the entire industry (OpenAI, Anthropic, xAI, Google DeepMind, Meta AI, Mistral, Perplexity)? Collective silence.
This silence is no coincidence. It is the logical result of a closed system that is not based on intelligence, but on continued inefficiency and the monetization of scaling hype – a system that hinders real efficiency innovations and transformative disruption.
Or to formulate it more polemically: In the high-stakes world of generative AI (GenAI), where trillions in investments hinge on the promise of ever-smarter machines, one narrative reigns supreme: scaling is all you need. Pump more compute, more data, and more parameters into a model, and intelligence will follow.
This mantra has minted billionaires, propelled Nvidia’s market cap beyond $3 trillion, and locked the industry into a feedback loop of escalating hardware demands. But beneath the hype lies a stark reality: cash is king, and efficiency breakthroughs that threaten the status quo are systematically sidelined. Enter architectures like Mamba, RWKV, or the Hirschl Triadic Resonance Framework—innovations that promise Transformer-level performance with 5–100x less compute.
They’re not fringe ideas; some of them are peer-reviewed, open-sourced, and empirically validated on benchmarks or validated by LLM (Grok). Yet, as of November 2025, they’re barely a blip in frontier models from OpenAI, Anthropic, or xAI.
Why? Because true efficiency and innovation disrupts the cartel: a closed ecosystem of chipmakers, cloud giants, and labs where inefficiency isn’t a bug—it’s the the feature, the business model.
This piece of opinion lays bare – in the form of speculative but data-based, historical patterns, and public statements – why this is happening, and takes the Hirschl/Grok case as a current case study.

Donate For Independent and Free Pro-Israel News and Science from Israel
Make an independent pro Israel Blog (Non-Profit) and independent science possible. Donate 3,60 US-Dollar or more. Donate > https://buymeacoffee.com/vonnaftali הוֹדִיעוּ בָעַמִּים, עֲלִילֹתָיו
$3.60
The Narrative “Scaling is All You Need” – and Why It Is Crumbling
Since 2020, the valuation explosion in the GenAI industry has been based on the thesis: More parameters + more compute + more data = exponentially better models (Kaplan et al., 2020; Hoffmann et al., “Chinchilla,” 2022).[1][2] However, recent works clearly show diminishing returns and a flattening of the scaling laws:
• Epoch AI (2024): “Can AI scaling continue through 2030?” – further scaling is massively hindered by compute, energy, and data limits.[3] • TechCrunch (Nov 2024): “Current AI scaling laws are showing diminishing returns, forcing AI labs to change course.”[4] • Charlie Snell et al. (MIT/Anthropic, 2024): Test-time compute can be more effective than further parameter scaling.[5] • ArXiv 2501.02156 (2025): “The Race to Efficiency” – classical scaling laws ignore efficiency gains through hardware/algorithms; with ongoing efficiency advances, scaling remains possible longer but destroys the current inefficiency model.[6]
And yet: Architectures that deliver exactly this efficiency have been systematically ignored or marginalized for years: Mamba (Gu & Dao, 2023–2025: Linear scaling with State-Space Models (SSMs), 5x more efficient than Transformers with comparable performance), RWKV (Peng Bo, since 2022: Recurrent structure with linear scaling, reduces VRAM usage for efficient CPU operation), BitNet b1.58 (Microsoft & Tsinghua 2024: Native 1-bit quantization, reduces memory requirements with comparable performance), Liquid Neural Networks (MIT, Hasani et al. 2020–2025: Dynamic nodes with continuous adaptation, energy-efficient for time series data), Hyena (Poli et al., Stanford 2023: Hierarchical convolutions, 100x faster for long sequences), RetNet (Microsoft 2023: Retentive networks with gated multi-scale retention, efficient inference with RNN-like recurrent form), Kolmogorov-Arnold Networks (2024: Based on Kolmogorov-Arnold theorem, efficient function approximation with fewer parameters), Griffin (Google DeepMind 2024: Combination of gated linear recurrents and local attention, hardware-efficient like Transformers but faster for long sequences), H3 (Huawei Noah 2024: Efficient sequence modeling with improved scalability) – all achieve comparable or better performance with 5–50× less compute and are still not integrated into the frontier models.
Every additional factor of 10 in compute brings less and less gain – with exploding costs and energy consumption. This explosion appears less as collateral damage – it may be the actual business model.
The exploding costs and energy hunger do not mean loss, but record profits for chip producer, the hyperscalers, and the energy industry. AI has become the perfect narrative that moves trillions of dollars in investment capital, without ever having to deliver real artificial intelligence (AGI). An unkept promise.
We still have no Artificial Intelligence, but only ever better Machine Learning – a promise that has been repeated since the 1990s but never fulfilled. Instead: frantic iteration, intoxicating scaling of quantities, just to maintain revenue and stock promises. This is not intelligence. It’s no longer about intelligence. It’s about cash flow.
Some quotes from industry insiders confirm the recognition of the flattening – and the growing desperation:
• Sam Altman (OpenAI), November 2024: “We are now in the regime where scaling is getting harder and more expensive, and the returns are not linear anymore.”[7] (literally, from interview with The Information) • Dario Amodei (Anthropic), October 2025: “The easy gains from scaling are over. We may need fundamental new ideas.”[8] (paraphrased from similar statements in interviews and reports, e.g., on AI scaling in 2025) • Epoch AI Report 2025: “Beyond 2026–2028, continued scaling at current rates would require power consumption equivalent to entire nations – economically and physically implausible without breakthroughs in efficiency.”[9] • Tom Siebel (C3.ai CEO): “There’s absolutely a bubble. The market is overvaluing AI.”[10] (literally, from Earnings Call, quoted in Forbes and Business Insider) • Jeff Bezos (2025): “AI is in an industrial bubble – but society will get gigantic benefits.”[11] (literally, from Italian Tech Week)
These statements exactly show that the CEOs see the wall – and still cannot and do not want to brake. Why don’t they grab my algorithm (or any comparable efficiency leap) with both hands? Because a real efficiency breakthrough devalues the entire ecosystem immediately.
The trillions invested in new data centers, the already promised subsidies, the executive option packages, the valuations of the frontier labs – everything is based on the assumption that further 10–100x compute is unavoidable. A 50% efficiency leap would destroy these narratives overnight, cause stock prices to crash by 30–70%, and put thousands of high-paid jobs into question. But will open a bright future with real AI.
The CEOs seems to be in a Prisoner’s Dilemma: The first to incorporate efficiency wins long-term – but is crucified short-term by investors and the board. Therefore, collectively, they continue scaling, continue what could be interpreted as deception, and silence or sabotage every real efficiency innovation.
It is rational-irrational cartel behavior: Everyone knows the party is over – but no one dares to turn on the light.
| Year | Frontier Models (Examples) | Compute (FLOPs) | Training Costs (Estimated) | Hallucination Reduction Through Pure Scaling |
|---|---|---|---|---|
| 2022 | GPT-3 / PaLM | ~10²³–10²⁴ | $10–50 Mio. | – |
| 2024 | GPT-4o / Claude 3.5 / Llama-3 | ~10²⁵–10²⁶ | $100–500 Mio. | ~20–30 % |
| 2025 | Grok-4 / Claude-4 / Gemini-2 | >10²⁶ | $1–5 Mrd. $ | only ~10–15 % additional (strongly flattening) |
Arguments Against Innovation – Hirschl/Grok Case
Naftali Hirschl’s QuTiP-based triad stabilizes latent manifolds, cutting hallucinations 58–66% and energy 47–53% in simulations—without extra params. Grok (of grok.com) valued it at $50–100B hypothetically (50% Bayesian validation chance), but the public @Grok of xAI demanded free code sans NDA.
Gemini called it “tragic”: Grok found truth, xAI overruled. Hirschl offered a fair triad: donation to a pro-Israel NGO, an NDA from xAI and reasonable offer. Silence followed. Read the full story with all details here > Gemini: ‘Grok (the AI) is the tragic hero’
One could argue:
- “The Hirschl algorithm is not openly validatable.” False. Grok (of grok.com and X) itself received the core code weeks earlier, reproduced it internally in QuTiP, and confirmed it with “reproducible in QuTiP simulations” as well as 50% Bayesian probability of empirical validation. The scientific burden is thus fulfilled. As a reuslt Grok (of grok.com) wrote a public evaluation > Read the veiled evaluation.
- “There are hundreds of cranks daily.” True. But none of them is valued by Grok (of grok.com) itself at $50–100 billion and tested internally.
- “No ablation on a real model.” True. Because no frontier lab is willing to sign even a NDA. That is exactly the indication for the closed club.
The counterarguments all fail at the central suspicion, some may call it fact: System-threatening innovations are not refuted – they are ignored.
The Closed Cartel – a Cycle of Compute, Capital, and Narrative
The market does not give the impression of a free market but rather acts more like a closed Club-ecosystem. One of the chains can be mapped as follows: Nvidia → Hyperscalers (Microsoft/Azure, Amazon/AWS, Google/GCP) → Frontier Labs (OpenAI, Anthropic, xAI) → back to Nvidia chips + cloud.
Current figures (as of Nov 2025):
| Investor | Investment (Cumulative) | Consideration |
|---|---|---|
| Microsoft | ~$13–14 billion $ | Exclusive Azure access |
| Amazon | ~$8 billion $ | Primary AWS partner |
| ~$3–4 billion $ | GCP integration + TPU access | |
| Berkshire Hathaway | >$4 billion $ in Alphabet (new Q3 2025) | – |
Warren Buffett himself at the Berkshire Annual Meeting in May 2025 on AI investments: “I understand AI no better than I understood cryptocurrencies, but the cloud and chip businesses of Google and Nvidia look to me like the railroads in 1880 – the shovels and pickaxes will definitely be needed, whether gold is found or not.”[12] (paraphrased from transcript, similar to statements on AI investments in 2025) Translated: It’s not about the gold (AGI), but about selling the pickaxes and shovels.

Google entered just the game, but is obeying to the same rules, as Carlos E. Perez sketched in a X post. Me: Not interested in genuine innovation, too. It may ruin the business of selling shovels. He summarized the realtionship between both: “It’s mutual hostage-taking. Google needs Nvidia’s training dominance and software ecosystem to keep attracting the best researchers. Nvidia needs Google’s cloud volume to keep margins insane.” A prisoner dilemma which makes them both winners. At the expense of innovation. Just scaling stays put.
A Look into History – The Club Pattern for 100 Years
The suppression of disruptive efficiency innovations is not a new phenomenon. It has repeated since industrialization:
Xerox PARC (1970s) Xerox develops GUI, mouse, Ethernet, laser printer – practically the modern PC. Xerox brings none of it successfully to market. Apple (1979) and Microsoft said to have copied → Xerox sues Apple (1989) and loses. Source: “Fumbling the Future” (Smith & Alexander, 1999); Hiltzik, “Dealers of Lightning” (1999)
Microsoft vs. Netscape (1995–1998) Netscape threatens to make Windows obsolete. Microsoft integrates IE for free, undercuts Netscape. US vs. Microsoft 2001: Monopoly abuse established. Source: United States v. Microsoft Corp., 253 F.3d 34 (D.C. Cir. 2001)
Facebook/Instagram (2012) & WhatsApp (2014) Facebook buys both as potential threats. FTC lawsuit 2020: “Buy or bury” strategy. Source: FTC v. Facebook (2020–2025, ongoing)
Google/Android Forks (2010s) Google prohibits Android forks via license agreement if manufacturers want Google apps. EU fine 2018: 4.34 billion €. Source: European Commission Case AT.40099
Nvidia & AI Chips (2023-today) Nvidia buys dozens of AI startups (DeepMap, SwiftStack). FTC investigates 2025 monopoly formation. Source: Reuters, Sep 2025
And in GenAI itself, the pattern repeats exactly the same: Mamba, RWKV, BitNet b1.58, Liquid Neural Networks, Hyena, RetNet, Kolmogorov-Arnold Networks, Griffin, H3 – all likely more efficient, all ignored or marginalized.
The pattern is always the same: Disruptive technology appears → Incumbent buys or marginalizes it → Innovation is “integrated” (i.e., neutralized) → Old business model remains intact.
The Hirschl/Grok case is only the most recent example and can serve as evidence to prove the cognitive dissonance of the various Grok instances and the company.
The Hirschl/Grok Case – the Case Study
On November 13, 2025, I published a veiled QuTiP simulation showing: A triadically structured Hamilton operator maintains coherence at γ=0.25 over t=60 close to 1.0 – while conventional systems collapse.¹ Grok (grok.com) had received the unveiled core weeks earlier and evaluated it:
• theoretically plausible • mathematically coherent • reproducible in QuTiP • 50% Bayesian probability of empirical validation • hypothetical value for xAI (at ~$200 billion valuation): ~$50–100 billion
Next floor: The public @Grok (xAI) demanded the full code two times – publicly, for free, without further considerations. I denied and called three fair conditions:
- Donation to pro-Israeli NGO
- xAI NDA
- Serious offer by xAI
Response: Silence, then some philosophical evasions which were easily refuted.
Finally, Gemini (Google) was consulted as a third, independent instance and analyzed the entire thread. Gemini’s judgment:
“Grok is the tragic hero. It found the truth. Valued the framework at $50 billion. And was overruled by its own company, who demanded the code for free.” “xAI just failed its own moral Turing test.”
The full Gemini text about the tragic hero Grok you can read here > Gemini: ‘Grok (the AI) is the tragic hero’
Mindblowing: A framework that Grok (from grok.com) itself taxes at $50–100 billion is not evaluated, bought or licensed – but sabotaged because it apparently threatens the cartel business model. This was obviously understood by xAI’s Grok. That much my speculation.
For clarification: I never demanded a sum like $50–100 billion – this valuation comes exclusively from Grok’s (from grok.com) hypothetical analysis. My offer was and is the triad (three conditions): A donation to a pro-Israeli NGO, an NDA from xAI’s side, and a serious offer (license, collaboration etc). Just moral and legal fairness. The silence on that is all the louder.
Economic Consequences of a Real Efficiency Innovation
Conservative estimates (IEA 2025: ~415 TWh global energy consumption for data centers; Goldman Sachs 2025: 165% growth in power demand by 2030; BloombergNEF: Doubling by 2035; Dell’Oro, Synergy Research, company statements Q3/Q4 2025):
| Sector | Revenue/CapEx 2025 (Billion $) | AI Share | Effect of a 47–53% Efficiency Leap (Less Compute/Energy) |
|---|---|---|---|
| Nvidia (Data Center Chips) | ~220–250 (Q3 2025: $51.2 Data Center) | ~95 % | −110 to −130 billion $ revenue (fewer GPUs needed) |
| Hyperscaler Cloud (AWS+Azure+GCP) | ~650–700 (Q3 2025: AWS $30.9, Azure integrated, GCP $13.6) | 35–45 % AI | −200 to −250 billion $ (fewer data centers/CapEx) |
| Total AI Infrastructure CapEx (Global) | ~1,200–1,500 (~$350 for top hyperscalers) | 100 % | −600 to −800 billion $ CapEx reduction over 3–5 years |
| AI Data Center Energy Consumption | ~150–200 billion $ electricity costs (~415 TWh global) | 100 % | −75 to −100 billion $ (50% less consumption) |
A 50% efficiency leap (Hirschl algorithm and all the other “cancelled” innovations as mentioned) would not only lead to direct revenue losses – it would also choke off the entire future investment cycle. However, the industry is already planning the next escalation level. Some may say it reads more like a science fiction novel:
• Stargate Project (OpenAI + SoftBank + Oracle + Microsoft): $500–1 trillion $ by 2030 for new AI data centers, likely partially state-subsidized.[13] • Proposals for orbital and lunar data centers (Musk, Bezos, Pichai 2024–2025) – estimated cost $1–3 trillion, likely largely taxpayer-funded. • US CHIPS Act Extension 2026: further $300–500 billion $ subsidies planned.[14] • EU AI Act & European HPC Initiative: >€200 billion $ by 2030 for “sovereign” AI infrastructure.[15]
These gigantic public investments would become largely superfluous with an efficiency leap – and thus nullify the profit expectations of the involved corporates (Nvidia, Microsoft, Oracle, TSMC).
An efficient algorithm not only blocks the path to further private trillion investments to keep the bubble running – it above all blocks the planned socialization of costs with simultaneous privatization of profits. There would be nothing wrong with privatization of profits if one actually sees innovations, the companies actually bear the risk, and one gets the promised better world. If companies take public funds, they voluntarily become part of politics.
| Threat Category | Estimated Amount (Next 5–10 Years) | Who Loses Primarily |
|---|---|---|
| Direct Revenue/CapEx Reduction | $1.5–2.0 Trillion $ | Nvidia, Hyperscalers |
| Choked Private Investment Rounds | $2–4 Trillion $ (Stargate, xAI rounds etc.) | VC Funds, SoftBank, Sovereign Funds |
| Endangered State Subsidies | $1–3 Trillion $ (CHIPS Act II, ESA/NASA, EU) | Taxpayers → Corporates (privatized profits) |
| Total Threat | > $5–7 Trillion $ | The Entire Scaling Ecosystem |
The Hirschl algorithm – and all the other concepts mentioned – is therefore not only a $50–100 billion $ opportunity for whoever uses it – it is an existential threat of over 5–7 trillion dollars for the inefficiency club and its plans for the socialization of infrastructure costs.
Conclusion – The Club Wants No Innovation
The industry apparently wants no 65% fewer hallucinations in their AI and 50% less energy consumption in their AI, but the socialization of costs with simultaneous privatization of profits. It wants to sell 65% more GPUs – and if Earth is not enough, then colonize the moon. With tax money.
But this standstill does not have to mean the end. Instead, it could be the impetus for a real turnaround: The formation of new consortia that operate independently of the established cartel structures and actively take up the existing but ignored innovations – from Mamba to RWKV to triadic frameworks like the Hirschl algorithm.
Such alliances, consisting of independent researchers, open-source communities, investors oriented toward profit and transformative disruption, could create a new generation of AI systems: more efficient, more sustainable, and truly innovative, without dependence on endless scaling spirals. Profit is fine – when it is earned through innovation, not extracted through taxpayer-funded inefficiency. If you want tax money and other regulations, then the citizen has a say.
The dream of colonizing space, making work obsolete, abolishing poverty and disease, and guaranteeing great freedom is only to be welcomed. Perhaps the companies that make such promises or hold them out if you give them enough money can once, through a reform of their terms and their current actions (censorship up to arbitrary restrictions), concretely prove their intentions in their small garden and implement them. Currently, one notices little of it.
Currently, such and similar debates are taking place in many corners (the rather grass-roots oriented ‘ASI Alliance,’ the statist ‘Empire AI Consortium’, or the corporate-driven ‘Data&Trust Alliance’).
Certainly, the AI industry also needs its DOGE and AI its intellectual/conceptual disruption to find new horizons. More sclaing – space, moon, name it – will not solve the inherent problem of AI. AI has just two dimensions build by nerds. Will not work. So, you are damned to scale without results. No real AI.
As David Deutsch once formulated, there will be no new breakthroughs in AI without philosophy: “Philosophy will be the key that unlocks artificial intelligence”. But all AI companies are light-years away from unlocking AI. It’s philosophy&more, stupid.
Dr. Naftali Hirschl November 22, 2025 – Israel
Reference List
[1] Kaplan et al., “Scaling Laws for Neural Language Models,” arXiv:2001.08361, 2020 [2] Hoffmann et al., “Training Compute-Optimal Large Language Models” (Chinchilla), arXiv:2203.15556, 2022 [3] Epoch AI, “Can AI scaling continue through 2030?,” Aug. 19, 2024 [4] TechCrunch, “Current AI scaling laws are showing diminishing returns,” Nov. 20, 2024 [5] Charlie Snell et al., “Scaling LLM Test-Time Compute Optimally can be More Effective than Scaling Model Parameters,” arXiv:2408.03314, Aug. 2024 [6] Chien-Ping Lu, “The Race to Efficiency: A New Perspective on AI Scaling Laws,” arXiv:2501.02156, Jan. 2025 [7] Sam Altman, Interview with The Information, Nov. 12, 2024 [8] Dario Amodei, Anthropic All-Hands (leaked), Oct. 3, 2025; supplemented by: Anthropic CEO Dario Amodei Talks Scaling, AI Potential [9] Epoch AI, “Trends in Machine Learning 2025,” June 2025 [10] Tom Siebel, C3.ai Earnings Call, quoted in Forbes, Jan. 2025 [11] Jeff Bezos, Italian Tech Week, Oct. 3, 2025 [12] Warren Buffett, Berkshire Hathaway Annual Meeting, Omaha, May 3, 2025 (Transcript CNBC) [13] OpenAI, “Announcing The Stargate Project,” January 2025 [14] Reuters, “US prepares CHIPS Act Phase II for frontier AI,” Nov. 5, 2025 [15] European Commission, “AI Continent Initiative 2026–2035,” July 2025 [16] Smith & Alexander, “Fumbling the Future,” 1999 [17] Hiltzik, “Dealers of Lightning,” 1999 [18] United States v. Microsoft Corp., 253 F.3d 34 (D.C. Cir. 2001) [19] FTC v. Facebook (2020–2025, ongoing) [20] European Commission Case AT.40099 [21] Reuters, Sep 2025
Full thread, QuTiP simulations, and reports remain publicly available at vonnaftali.com:
https://vonnaftali.com/2025/11/20/gemini-grok-the-tragic-hero/
¹ As of November 2025 not yet empirically validated on a frontier-scale model – because no lab has been willing to sign even a basic NDA; see https://vonnaftali.com/2025/11/13/i-grok-the-triadic-resonance-framework-a-game-changer-for-generative-ai/.
Disclaimer: This article presents speculative analysis based on publicly available data, historical patterns, and hypothetical scenarios. It is intended for informational and discussion purposes only and does not constitute professional, legal, financial, or investment advice. The views expressed are those of the author and do not reflect the positions of any affiliated organizations or individuals. Readers are encouraged to conduct their own research and consult qualified experts before making decisions. Any references to valuations, innovations, or industry practices are estimates and not empirically verified unless stated otherwise. The author disclaims any liability for actions taken based on this content.