AI Gamble: Be Smart, Not Just Fast


  • Post-9/11, the US established "fusion centres" for real-time intelligence sharing, evolving to AI integration by 2025 for enhanced data analysis.
  • AI replaces human roles in these centres but poses risks, demonstrated by instances where generative models like ChatGPT provided false legal information.
  • Public organizations face challenges in selecting AI strategies, including reliance on third-party models, open-source solutions, or developing proprietary AI systems.
  • The adoption of AI has shown biases, raising concerns about discrimination against marginalized groups, emphasizing the need for careful implementation and oversight.
  • Successful AI integration requires phased deployments, ongoing oversight, and the development of responsible use cases, as shown by the DHS’s approach to AI in government operations.

+

Get Details

ULA Tests Government-Compliant Chatbot by OpenAI


  • United Launch Alliance (ULA) is piloting "RocketGPT," an AI chatbot by OpenAI designed for defense contractors handling sensitive data, deployed to 150 employees.
  • Operating on Microsoft Azure’s secure cloud platform, RocketGPT complies with strict government security standards (ITAR) for aerospace and defense information.
  • ULA’s CEO, Tory Bruno, emphasizes the AI’s role as a research assistant to streamline time-consuming tasks, rather than replacing human workers.
  • OpenAI has secured a $200 million contract with the Pentagon to develop AI capabilities for national security, marking a significant victory for the company.
  • Although excited about AI potential, Bruno advises maintaining realistic expectations, noting that AI requires extensive training and human oversight for accuracy.

+

Get Details

Power Grid Bottlenecks May Hinder Lee’s AI Goals


  • President Lee Jae Myung emphasizes AI industry growth as a key economic strategy, pledging government support for investment during his Ulsan trip.
  • Lee’s control over both the administration and legislature facilitates implementation of his promises to reduce the gap with global AI leaders.
  • Appointments of seasoned experts in AI and policy reflect a shift towards merit-based leadership in Lee’s administration.
  • South Korea aims to significantly enhance its renewable energy generation, currently lagging behind other OECD countries, with targets for carbon-free output by 2030.
  • Despite ambitions, challenges such as power grid expansion and local conflicts could hinder the success of Lee’s AI and energy initiatives.

+

Get Details

Researchers Call for More Standards and Tests for AI Models


  • The rapid increase in AI usage has led to more harmful outcomes, including hate speech and copyright infringements, exacerbated by insufficient regulations and testing.
  • Current research indicates that achieving desired behavior in AI models remains challenging, with limited progress over the past 15 years in understanding these complexities.
  • Red teaming, involving rigorous testing by external experts, is advocated to better evaluate AI risks, but there is a shortage of personnel in this field.
  • Project Moonshot seeks to improve AI evaluation through a toolkit that incorporates benchmarking and continuous assessment, with aims for customization in various industries.
  • Experts emphasize the need for stricter evaluation standards for AI, akin to those in pharmaceuticals, to prevent misuse and ensure safety before models are deployed.

+

Get Details

Newsom’s AI Report May Impact Legislation – Pasadena Now


  • The Dreamforce conference, touted as the largest AI event, took place in San Francisco on September 18, 2024.
  • California Governor Gavin Newsom vetoed a key AI regulation, prompting a request for recommendations on balancing safety and innovation.
  • The California Report on Frontier AI Policy emphasizes the need for transparency through regulations such as whistleblower protections and independent audits.
  • Recent evaluations highlight advanced AI capabilities, including independent actions and potential threats, stressing the importance of addressing emerging risks.
  • The California Legislature is considering new AI regulations, focusing on labeling AI-generated content and protocols for chatbot interactions, while the report encourages coordinated governance to reduce business compliance burdens.

+

Get Details

AI Applications in Government


  • AI adoption in the public sector varies by agency and infrastructure reliance on legacy systems.
  • Defense and law enforcement agencies are actively utilizing AI for advanced technologies like computer vision and deep neural networks.
  • AI assists in uncovering suspicious activities and improving operational efficiency in mission-critical applications.
  • Civil and health agencies assess AI readiness and identify high-return use cases across various operational contexts.
  • Key AI applications include climate analysis, trade surveillance, research, and fraud detection.

+

Get Details

AI Search is Transforming SEO: Advisors Must Adapt


  • Google introduced AI Overviews in May 2024, leading to significant drops in website traffic for firms like District Capital Management, which saw visits fall from 31,800 to 16,500.
  • Traditional SEO strategies are becoming less effective due to the rise of zero-click search results and AI summaries, making credibility and high-quality content crucial.
  • Advisors are adapting by focusing on niche markets and using targeted strategies like localized search optimization, with positive results noted by firms like Iconoclastic Capital Management.
  • New approaches emphasize LLM engine optimization (LEO) and generative engine optimization (GEO), utilizing clear, user-focused content that AI can easily process.
  • Engagement through Q&A formats and niche-focused content remains valuable, as AI may reference such materials in recommendations even without direct human traffic to the articles.

+

Get Details

AI Models from Google, OpenAI, Anthropic Score 0% on ‘Hard’ Coding Problems


  • AI’s Limitations in Coding: Recent research highlights a significant gap between AI models and elite human coding abilities, particularly in complex problem-solving scenarios.
  • Benchmarking Challenges: Current coding benchmarks, like LiveCodeBench and SWE-Bench, are criticized for inconsistencies and not effectively isolating AI performance in algorithm design.
  • Introduction of LiveCodeBench Pro: A new evaluation standard was launched, featuring 584 problems from prestigious competitions, categorically annotated for difficulty, revealing AI’s struggles with ‘Hard’ problems (0% success).
  • Model Performance Insights: AI models excel at knowledge-heavy tasks but falter on observation-heavy problems requiring novel insights and complex reasoning, indicating room for substantial improvements.
  • Task Duration and Success Rates: Research suggests AI’s success in longer tasks decreases exponentially, necessitating shorter durations for reliable performance, with complex coding projects still uncertain in their feasibility.

+

Get Details

UK AI Trial Saves Civil Servants 2 Weeks a Year


  • Civil servants in the UK saved the equivalent of two weeks’ working time annually by using AI tools, specifically Microsoft’s Copilot, which increased efficiency in drafting documents and preparing reports.
  • The trial involved over 20,000 officials, who reported an average daily time saving of 26 minutes, with Copilot particularly effective in creating presentations and managing routine tasks.
  • 82% of civil servants expressed strong support for the continued use of AI, believing it allows them to provide more personalized support to citizens.
  • The UK government aims for £45 billion in public sector savings through digital transformation and is developing new AI tools like “Humphrey” to enhance productivity.
  • Despite benefits, concerns about AI include potential glitches, biased algorithms, and criticisms from human rights groups regarding predictive policing and copyright law relaxations.

+

Get Details

AI Energy Consumption: Silence from the Experts


  • Sam Altman, CEO of OpenAI, stated that an average ChatGPT query consumes 0.34 watt-hours of energy, comparable to brief usage of an oven or lightbulb.
  • Concerns arise regarding the lack of context for this figure, including how "average" queries are defined and if training and server cooling are factored into the energy use.
  • Experts, like Sasha Luccioni from Hugging Face, question the credibility of Altman’s energy estimate and the absence of detailed information from OpenAI.
  • Research highlights the urgent need for transparency in AI’s environmental impact, noting that 84% of large language model (LLM) usage has no environmental disclosures.
  • Discrepancies exist in energy consumption claims, such as ChatGPT requests allegedly using ten times more energy than Google searches, which lacks solid evidence and arises from unverified statements.

+

Get Details