Study: AI Mimics Human Thought Processes


  • Chinese researchers discovered that AI models like ChatGPT process information similarly to the human mind, indicating fundamental similarities in conceptual knowledge.
  • The study involved LLMs performing “odd-one-out” trials, revealing that AI can categorize objects in ways akin to human cognition, particularly in language grouping.
  • AI generated 66 conceptual dimensions to sort objects and showed strong alignment with human neural activity patterns, suggesting human-like conceptual representations.
  • However, AI struggles with visual categorization and deeper cognitive tasks such as analogical reasoning, raising questions about its understanding of object significance and emotional value.
  • The researchers aim to develop more human-like artificial cognitive systems for better collaboration between AI and humans.


Shadow AI in Government


  • Widespread AI Adoption: Over half of public-sector employees use AI daily, with varying availability across state and federal agencies, raising concerns about unregulated use of "shadow AI."

  • Procurement Challenges: Low-cost AI tools often escape traditional budget oversight due to small transaction amounts, complicating procurement processes in government.

  • Risks of Unapproved Tools: Employees frequently use unauthorized AI tools to circumvent slow purchasing and strict IT rules, leading to potential exposure of sensitive data.

  • Integration Issues: AI embedded in everyday software can create confusion about approved tools, increasing the risk of data breaches and compliance challenges.

  • Recommendations for Mitigation: Enhance visibility into AI use, streamline procurement processes, establish clear policies, and educate the workforce on permitted AI use to foster innovation while managing risks.


NY Lawmakers Pass Bill Addressing Frontier AI Risks


  • RAISE Act Overview: New York’s Responsible AI Safety and Education Act aims to impose strict safety protocols on labs investing over $100 million in frontier AI models.

  • Safety Measures: The act mandates public safety protocols, third-party audits, and 72-hour incident reporting for AI incidents involving significant harm.

  • Critical Harm Definition: "Critical harm" is defined as an event causing at least 100 deaths or over $1 billion in damages, with civil penalties for violations reaching up to $30 million.

  • Whistleblower Protections: New anti-retaliation clauses safeguard employees who report substantial risks of critical harm, aiming to encourage transparency within AI labs.

  • Regulatory Landscape: The RAISE Act sets a precedent amid fragmented regulations across states and a slow federal response, positioning New York as a leader in AI policy.


Concerns Grow Over UK’s Humphrey AI Tool and Big Tech Reliance


  • The UK government’s AI tool, Humphrey, utilizes models from OpenAI, Anthropic, and Google, prompting concerns over reliance on big tech for civil service reform.
  • All officials in England and Wales are set to receive training on Humphrey, aiming to enhance public sector efficiency without overarching commercial agreements with tech companies.
  • Critics raise alarms about AI’s use of copyrighted material, with public figures from the creative sector advocating for stronger protections against unauthorized use.
  • The government plans to maintain transparency and evaluate the accuracy of AI tools while managing costs, which are expected to decrease as AI models improve.
  • Officials assert that integrating AI tools like Humphrey will not hinder their regulatory capabilities, emphasizing the potential of AI to streamline administrative tasks in public services.


Strong Data Strategy Crucial for Boosting Gen AI Returns


  • IBM predicts significant ROI from AI and generative AI investments within 18-24 months, contingent on a solid data strategy.
  • Successful AI adoption requires streamlining data, establishing AI policies, reskilling employees, and promoting cultural transformation.
  • 80% of AI implementation time is spent on data preparation, with a strong foundation being vital for effective AI use.
  • A phased approach is crucial for enterprises, starting with data collection and quality, followed by ETL processes, then reporting integration and governance.
  • Effective AI scaling relies on people and processes, emphasizing the necessity for strong data foundations to ensure meaningful business value from AI initiatives.


AI Boosts Pagaya (PGY) Stock to Multi-Year Highs


  • AI-Driven Growth: Pagaya Technologies (PGY) utilizes AI to revolutionize the lending sector, achieving significant stock growth and profitability ahead of expectations.

  • Smart Middleman Model: The company enhances lending decisions for financial institutions without taking on credit risk, operating through a fee-based revenue structure.

  • Network Effect: Pagaya has established partnerships with over 31 lenders, improving its algorithms and creating competitive advantages through increased data intelligence.

  • Impressive Financials: In Q1 2025, Pagaya achieved its first positive net income of $8 million and projected ongoing profit growth, with estimated revenue between $1.175 billion and $1.3 billion.

  • Investment Potential: Despite strong fundamentals and a low valuation compared to peers, the stock is highly volatile, appealing to investors looking for exposure to AI advancements and fintech transformation.


Publishers Warn EU AI Rules Could Impact Licensing Market | MLex


  • Angela Mills Wade warns that weak AI regulations may harm the licensing market between rights holders and AI developers.
  • At a summit in Greece, Jakob Kucharczyk from OpenAI emphasized the company’s partnerships with news publishers to prevent copyright misuse.
  • The EU is finalizing its AI code of practice and reviewing its Copyright Directive, which may impact licensing, transparency, and liability policies.
  • OpenAI’s European intellectual property lead defended the company’s copyright safeguards amid concerns from publishers’ associations.


China’s AI Race Victory: What If?


  • AI Competition Urgency: U.S. leaders highlight the necessity of maintaining a competitive edge in AI against China, warning of the consequences of failing to innovate quickly and effectively.

  • Two-Pronged Strategy: The U.S. employs a strategy of restricting Chinese tech exports while fostering domestic AI innovation through light regulation and significant investment in infrastructure.

  • Narrowing Gap: China is rapidly closing the performance gap in AI with significant government investments and strategic initiatives, challenging the former U.S. dominance in the field.

  • Planning for Coexistence: The U.S. must prepare for scenarios where competing AI ecosystems exist while exploring ways to demonstrate the advantages of U.S. models and easing migration costs for users.

  • Addressing Risks: Policymakers should implement safeguards and develop adjudication systems to mitigate risks associated with using foreign AI models while considering data-sharing protocols that balance security with potential benefits.


AI and the Rise of Government Scams


  • Government imposter scams are rising, with losses exceeding $400 million last year.
  • Reports of scams targeting agencies such as DMVs and tax offices rose more than 50% from 2022 to 2024.
  • Scammers utilize realistic communication methods, including spoofed “.gov” emails, to demand payments for fictitious obligations.
  • The use of generative AI tools has made it easier for fraudsters to convincingly impersonate officials.
  • Local governments are urged to enhance public awareness campaigns and implement stronger digital security measures to protect residents.


MindHYVE.ai and DV8 Infosystems Partner with KICTANet to Develop Kenya’s AI Policy


  • Partnership Announcement: MindHYVE.ai, DV8 Infosystems, and KICTANet signed a Memorandum of Understanding (MoU) to co-develop Kenya’s National Artificial Intelligence Policy.

  • Strategic Objectives: The collaboration aims to enhance Kenya’s National AI Strategy for 2025-2030, focusing on ethical and regulatory design, participatory engagement, and a publication-ready policy framework.

  • Commitment to Ethical AI: The MoU underscores a shared commitment to responsible AI governance, positioning Kenya as a leader in developing ethical and inclusive AI policies.

  • Global Impact: This initiative signifies a new phase in Kenya’s digital transformation, aiming to establish a sovereign AI governance model that influences global AI policy discourse.

  • Leadership Statements: Leaders from the partnering organizations emphasized the importance of the MoU in shaping responsible AI practices and expressed their commitment to integrating diverse stakeholders in the policy development process.
