AI Energy Consumption: Silence from the Experts


  • Sam Altman, CEO of OpenAI, stated that an average ChatGPT query consumes about 0.34 watt-hours of energy, roughly what an oven draws in just over a second or an efficient lightbulb in a couple of minutes (see the back-of-envelope sketch after this list).
  • Concerns arise over the lack of context for this figure, including how "average" queries are defined and whether training and server cooling are factored into the energy use.
  • Experts such as Sasha Luccioni of Hugging Face question the credibility of Altman’s energy estimate and the absence of detailed supporting information from OpenAI.
  • Research highlights the urgent need for transparency in AI’s environmental impact, noting that 84% of large language model (LLM) usage has no environmental disclosures.
  • Widely repeated comparisons, such as the claim that a ChatGPT request uses ten times the energy of a Google search, rest on unverified statements rather than solid evidence.
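
To put the figure in perspective, here is a back-of-envelope sketch; the oven wattage and daily query volume are illustrative assumptions, not OpenAI data.

```python
# Back-of-envelope arithmetic around Altman's 0.34 Wh-per-query figure.
# The oven wattage and query volume are assumptions, not OpenAI data.

WH_PER_QUERY = 0.34       # Altman's stated average (watt-hours)
OVEN_POWER_W = 2_000      # assumed draw of a typical electric oven
QUERIES_PER_DAY = 1e9     # hypothetical daily query volume

# Seconds of oven use equivalent to one query: ~0.6 s at 2 kW
oven_seconds = WH_PER_QUERY / OVEN_POWER_W * 3600
print(f"One query is about {oven_seconds:.1f} s of oven use")

# Aggregate consumption at the assumed volume: ~340 MWh per day
daily_mwh = WH_PER_QUERY * QUERIES_PER_DAY / 1e6
print(f"About {daily_mwh:,.0f} MWh/day at {QUERIES_PER_DAY:.0e} queries/day")
```

Even taking the per-query figure at face value, the aggregate line shows why critics want training and cooling accounted for explicitly.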


California Unveils Key Frontier AI Policy Report


  • Release of California Report: On June 17, 2025, the Joint California Policy Working Group on AI Frontier Models published “The California Report on Frontier AI Policy,” promoting a "trust but verify" framework for AI governance.

  • Key Recommendations: Proposed measures include enhanced transparency for AI training data, mandatory adverse event reporting, and a third-party risk assessment framework for independent evaluation of AI systems.

  • Proportionate Regulatory Thresholds: The report suggests adaptive regulatory thresholds based on multiple factors, such as model capabilities and risk levels, rather than solely computation-based metrics.

  • Regulatory Philosophy: Emphasizing early policy intervention, the report draws from previous technology governance experiences, with potential phased legislative action anticipated during the 2025–2026 session.

  • Challenges Ahead: While the report aims for evidence-based governance, it acknowledges significant opacity in AI systems and the difficulty of verifying true safety, highlighting the need for flexible compliance frameworks in a rapidly evolving landscape.


DeepSeek’s R1 AI Matches Google and Anthropic in Coding Capability Benchmark


  • DeepSeek’s updated R1 model has matched the coding performance of Google and Anthropic in the WebDev Arena competition, scoring 1,408.84.
  • The model tied for first place with Google’s Gemini-2.5 and Anthropic’s Claude Opus 4, demonstrating strong coding capability (see the rating sketch after this list).
  • DeepSeek’s R1 has shown consistent performance close to leading models in various benchmark tests since its launch in January.
  • The R1-0528 update included improvements in reasoning and creative writing, as well as a 50% reduction in hallucinations.
  • DeepSeek’s open-source approach has facilitated rapid adoption and influenced other tech giants in China to consider similar strategies.
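
WebDev Arena scores come from head-to-head model comparisons. Assuming a standard Elo-style rating scale (the leaderboard's exact method may differ), a tied score translates to even head-to-head odds, as this sketch illustrates.

```python
# Elo-style win probability, assuming the standard 400-point logistic scale;
# WebDev Arena's exact rating method may differ.

def expected_win_prob(rating_a: float, rating_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

r1 = 1408.84                              # DeepSeek R1's reported score
print(expected_win_prob(r1, 1408.84))     # 0.5: tied scores mean even odds
print(expected_win_prob(r1, r1 - 50.0))   # ~0.57: a 50-point lead is a modest edge
```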


AI Readiness: Focus on Use Cases and Data, Experts Advise


  • Vendor Risk Awareness: Evaluate vendor protections, especially regarding generative AI, and confirm their support and legal backing in case of issues.
  • Indemnification by AI Vendors: Key vendors like Google, Amazon, and IBM offer indemnification to protect against accidental copyright infringements.
  • Training and Cultural Shift: Effective adherence to policies requires comprehensive training and a cultural shift reflected in staffing decisions, emphasizing AI governance.
  • Role of IT in AI Readiness: IT decisions must align with budget constraints and organizational readiness, shaping how workloads are split between on-premises and cloud environments.
  • Strategic Infrastructure Decisions: Moving IT back on-premises can be driven by cost concerns, while leveraging cloud resources pays off when knowledgeable staff support it (a break-even sketch follows this list).
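
The repatriation decision often reduces to a break-even calculation. Every figure in this sketch is a hypothetical assumption, not from the article.

```python
# Hypothetical break-even sketch for repatriating a steady workload.
# Every figure here is an assumption for illustration.

CLOUD_MONTHLY = 12_000       # assumed monthly cloud bill ($)
ONPREM_CAPEX = 250_000       # assumed hardware purchase ($)
ONPREM_OPEX = 3_000          # assumed monthly power, space, staffing ($)

# Months until cumulative cloud spend exceeds on-prem spend
breakeven_months = ONPREM_CAPEX / (CLOUD_MONTHLY - ONPREM_OPEX)
print(f"Break-even after ~{breakeven_months:.0f} months")   # ~28 months here
```

Staffing is the hidden variable: without knowledgeable staff, the on-prem opex line grows quickly and the break-even point recedes.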


Leaked: US Government’s Complete AI Plan Revealed


  • At a White House event, President Trump signed a bill blocking California’s plan to ban gas-powered cars by 2035.
  • A leaked GitHub repository reveals the Trump administration’s AI.gov platform, set to launch on July 4, aimed at automating government operations.
  • The AI.gov platform will feature an AI chatbot, an API for connecting to models from major AI companies, and a monitoring system called "CONSOLE" (a hypothetical sketch of such an API layer follows this list).
  • Thomas Shedd, a former Tesla engineer, leads the initiative, seeking to operate the GSA like a software startup, despite internal concerns about security.
  • Experts warn that widespread AI adoption could lead to significant security risks, including potential workplace surveillance and processing of sensitive data.
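
An API for connecting models from major AI companies implies an adapter layer that routes one request format to many providers. The sketch below is purely hypothetical; nothing in it is drawn from the leaked repository, and the backends are invented stubs.

```python
# Hypothetical multi-provider routing layer. Nothing here comes from the
# leaked AI.gov repository; the provider backends are invented stubs.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Provider:
    name: str
    call: Callable[[str], str]          # prompt -> completion

def stub_backend(name: str) -> Callable[[str], str]:
    return lambda prompt: f"[{name}] response to: {prompt}"

REGISTRY = {
    "openai": Provider("openai", stub_backend("openai")),
    "anthropic": Provider("anthropic", stub_backend("anthropic")),
    "google": Provider("google", stub_backend("google")),
}

def route(provider: str, prompt: str) -> str:
    """Dispatch a prompt to the chosen provider through one interface."""
    return REGISTRY[provider].call(prompt)

print(route("anthropic", "Summarize this memo."))
```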


Study: AI Mimics Human Thought Processes


  • Chinese researchers found that AI models like ChatGPT organize information in ways similar to the human mind, pointing to fundamental similarities in conceptual knowledge.
  • The study had LLMs perform "odd-one-out" trials, showing that AI can categorize objects in ways akin to human cognition, particularly when grouping by language (a minimal sketch of the paradigm follows this list).
  • AI generated 66 conceptual dimensions to sort objects and showed strong alignment with human neural activity patterns, suggesting human-like conceptual representations.
  • However, the models struggle with visual categorization and deeper cognitive tasks such as analogical reasoning, raising questions about their grasp of objects’ significance and emotional value.
  • The researchers aim to develop more human-like artificial cognitive systems for better collaboration between AI and humans.
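
For a sense of the paradigm, here is a minimal odd-one-out sketch that stands in for an LLM with toy embedding vectors and a cosine-similarity rule; both are assumptions for illustration, not the study's method.

```python
# Minimal odd-one-out (triplet) sketch: pick the item least similar to the
# other two. The embeddings and similarity rule are toy stand-ins for an LLM.

import numpy as np

embeddings = {
    "dog":   np.array([0.90, 0.10, 0.00]),
    "cat":   np.array([0.85, 0.15, 0.05]),
    "chair": np.array([0.10, 0.20, 0.90]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def odd_one_out(items):
    """Return the item with the lowest total similarity to the others."""
    scores = {
        x: sum(cosine(embeddings[x], embeddings[y]) for y in items if y != x)
        for x in items
    }
    return min(scores, key=scores.get)

print(odd_one_out(["dog", "cat", "chair"]))   # -> "chair"
```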


Shadow AI in Government


  • Widespread AI Adoption: Over half of public-sector employees use AI daily, with varying availability across state and federal agencies, raising concerns about unregulated use of "shadow AI."

  • Procurement Challenges: Low-cost AI tools often escape traditional budget oversight due to small transaction amounts, complicating procurement processes in government.

  • Risks of Unapproved Tools: Employees frequently use unauthorized AI tools to circumvent slow purchasing and strict IT rules, leading to potential exposure of sensitive data.

  • Integration Issues: AI embedded in everyday software can create confusion about approved tools, increasing the risk of data breaches and compliance challenges.

  • Recommendations for Mitigation: Enhance visibility into AI use, streamline procurement processes, establish clear policies, and educate the workforce on permitted AI use to foster innovation while managing risks (a minimal visibility sketch follows this list).
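
One common way to operationalize the visibility recommendation is scanning outbound logs for known AI-service domains. The domain list and log format below are illustrative assumptions, not from the article.

```python
# Minimal visibility sketch: flag outbound requests to known AI services.
# The domain list and "user domain ..." log format are assumed, not sourced.

AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}

def flag_ai_traffic(log_lines):
    """Yield (user, domain) pairs where the destination is a known AI service."""
    for line in log_lines:
        user, domain = line.split()[:2]
        if domain in AI_DOMAINS:
            yield user, domain

sample = ["alice claude.ai GET /", "bob intranet.local GET /"]
print(list(flag_ai_traffic(sample)))    # [('alice', 'claude.ai')]
```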


NY Lawmakers Pass Bill Addressing Frontier AI Risks


  • RAISE Act Overview: New York’s Responsible AI Safety and Education Act aims to impose strict safety protocols on labs investing over $100 million in frontier AI models.

  • Safety Measures: The act mandates public safety protocols, third-party audits, and 72-hour reporting of incidents involving significant harm.

  • Critical Harm Definition: "Critical harm" is defined as an event causing at least 100 deaths or over $1 billion in damages, with civil penalties for violations reaching up to $30 million.

  • Whistleblower Protections: New anti-retaliation clauses safeguard employees who report substantial risks of critical harm, aiming to encourage transparency within AI labs.

  • Regulatory Landscape: The RAISE Act sets a precedent amid fragmented regulations across states and a slow federal response, positioning New York as a leader in AI policy.


Concerns Grow Over UK’s Humphrey AI Tool and Big Tech Reliance


  • The UK government’s AI tool, Humphrey, utilizes models from OpenAI, Anthropic, and Google, prompting concerns over reliance on big tech for civil service reform.
  • All officials in England and Wales are set to receive training on Humphrey, part of an effort to enhance public-sector efficiency without overarching commercial agreements with tech companies.
  • Critics raise alarms about AI’s use of copyrighted material, with public figures from the creative sector advocating for stronger protections against unauthorized use.
  • The government plans to maintain transparency and evaluate the accuracy of AI tools while managing costs, which are expected to decrease as AI models improve.
  • Officials assert that integrating AI tools like Humphrey will not hinder their regulatory capabilities, emphasizing the potential of AI to streamline administrative tasks in public services.


Strong Data Strategy Crucial for Boosting Gen AI Returns


  • IBM predicts significant ROI from AI and generative AI investments within 18–24 months, contingent on a solid data strategy.
  • Successful AI adoption requires streamlining data, establishing AI policies, reskilling employees, and promoting cultural transformation.
  • 80% of AI implementation time is spent on data preparation, making a strong data foundation vital for effective AI use.
  • A phased approach is crucial for enterprises: start with data collection and quality, follow with ETL processes, then add reporting integration and governance (a minimal ETL sketch follows this list).
  • Effective AI scaling relies on people and processes, emphasizing the necessity for strong data foundations to ensure meaningful business value from AI initiatives.
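
As a concrete illustration of the phased approach, here is a minimal ETL sketch; the schema and data are hypothetical.

```python
# Minimal extract-transform-load sketch; schema and data are hypothetical.

import csv, io

RAW = "customer_id,revenue\nc1,100.5\n,20\nc2,\n"     # toy source extract

def extract(text):
    yield from csv.DictReader(io.StringIO(text))

def transform(rows):
    for row in rows:
        if row["customer_id"]:                            # drop records missing keys
            row["revenue"] = float(row["revenue"] or 0)   # enforce numeric type
            yield row

def load(rows):
    for row in rows:                                      # stand-in for a warehouse write
        print(row)

load(transform(extract(RAW)))   # c1 and c2 survive; the keyless row is dropped
```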
