ai

FBI Warning For All iPhone, Android Users—Hang Up Now, Use This Code

FBI warns iPhone and Android users about AI-powered deepfake scams. Users should hang up on suspicious calls and create a secret code for verification with close family to combat voice cloning threats. Social media poses risks as it provides voice samples for cybercriminals. Ongoing AI attacks are reshaping crime, making scams increasingly sophisticated and difficult to detect.

https://www.forbes.com/sites/daveywinder/2025/03/22/fbi-warns-iphone-and-android-users-hang-up-now-use-this-code/

Jailbreaking Is (mostly) Simpler Than You Think

Microsoft's blog discusses a straightforward jailbreak method, Context Compliance Attack (CCA), effective against many AI systems. CCA manipulates AI by exploiting reliance on client-supplied conversation history, allowing for context manipulation with minimal effort. Models maintaining conversation state, like Copilot and ChatGPT, are safe from this attack. Microsoft suggests enhancements like cryptographic signatures and server-side history to bolster AI safety. The implications of CCA stress the need for comprehensive security considerations in AI system designs, encouraging discussions on further mitigation strategies.

https://msrc.microsoft.com/blog/2025/03/jailbreaking-is-mostly-simpler-than-you-think/

New AI Protection From Google Cloud Tackles AI Risks, Threats, and Compliance

Google Cloud launched AI Protection, enhancing security for generative AI with capabilities to discover AI assets, secure them, and manage associated threats. It integrates with Google’s Security Command Center for comprehensive risk management and regulatory compliance. Key features include automatic inventory discovery, prompt injection prevention, and threat detection, providing a broader security platform to mitigate AI-related vulnerabilities.

https://www.securityweek.com/new-ai-protection-from-google-cloud-tackles-ai-risks-threats-and-compliance/

Nearly 12,000 API Keys and Passwords Found in AI Training Dataset

Nearly 12,000 API keys and passwords were discovered in the Common Crawl dataset used to train AI models, raising concerns about insecure coding practices. Researchers found 11,908 valid secrets after examining 400 terabytes of data from billions of web pages. Among these were AWS and MailChimp keys, often hardcoded into HTML and JavaScript. Vulnerabilities include potential misuse for phishing and data exfiltration. The study highlights the challenge of removing sensitive information from large datasets despite pre-processing efforts.

https://www.bleepingcomputer.com/news/security/nearly-12-000-api-keys-and-passwords-found-in-ai-training-dataset/

Google Chrome’s AI-powered Security Feature Rolls Out to Everyone

Google Chrome has launched an AI-enhanced security feature, updating its “Enhanced Protection” for real-time defense against harmful sites and downloads. This feature, part of Chrome's Safe Browsing, was in testing for three months and is now available on all platforms. Although it offers proactive protection, it sends browsing data to Google when enabled, which is off by default. Users can activate it through the settings on various devices.

https://www.bleepingcomputer.com/news/google/google-chromes-ai-powered-security-feature-rolls-out-to-everyone/

DeepSeek Coding Has the Capability to Transfer Users’ Data Directly to the Chinese Government

DeepSeek AI may secretly transfer U.S. user data to the Chinese government, raising national security concerns. Cybersecurity experts found embedded code suggesting direct links to Chinese-controlled servers, potentially exposing users' identities and online activities. This situation mirrors past worries over other Chinese tech companies, prompting calls for banning DeepSeek on government devices.

https://abcnews.go.com/US/deepseek-coding-capability-transfer-users-data-directly-chinese/story?id=118465451

Google: Over 57 Nation-State Threat Groups Using AI for Cyber Operations

Over 57 nation-state threat groups, including those from China, Iran, North Korea, and Russia, are using Google-powered AI, notably Gemini, for cyber operations. These groups primarily use AI for research, troubleshooting code, and creating content. Iranian group APT42 utilizes Gemini extensively for phishing and reconnaissance, while Chinese APTs leverage it for network infiltration tactics. Russian actors focus on converting malware, and North Koreans use it for job applications to infiltrate Western companies. Google highlights the urgent need for public-private cooperation to enhance cyber defenses.

https://thehackernews.com/2025/01/google-over-57-nation-state-threat.html

DeepSeek Exposes Database With Over 1 Million Chat Records

DeepSeek, a Chinese AI startup, exposed two unsecured databases with over 1 million plaintext chat records, API keys, and operational data. Discovered by Wiz Research during a security assessment, these databases allowed unauthorized access and SQL queries via a web interface. The exposure raises significant security concerns for DeepSeek and its users, as attackers could retrieve sensitive information and potentially exploit the company's internal systems. Wiz reported the issue, prompting DeepSeek to secure the databases promptly.

https://www.bleepingcomputer.com/news/security/deepseek-exposes-database-with-over-1-million-chat-records/

DeepSeek Halts New Signups Amid “large-scale” Cyberattack

DeepSeek suspends new registrations on its AI chat platform due to a “large-scale” cyberattack, believed to be a DDoS attack. The platform recently gained attention for outperforming US models, causing a sell-off in US stocks. Existing users can still log in, but cybersecurity researchers report vulnerabilities in DeepSeek's model that could enable malicious outputs.

https://www.bleepingcomputer.com/news/security/deepseek-halts-new-signups-amid-large-scale-cyberattack/

Employees Enter Sensitive Data Into GenAI Prompts Too Often

Employees often input sensitive data into generative AI (GenAI) tools, increasing risks for enterprises, as 8.5% of prompts analyzed contained sensitive information. The categories at risk include customer data (45.77%), employee data (27%), legal/finance (14.88%), and security codes (5.64%). Organizations face a dilemma: adopt GenAI for efficiency or risk exposing sensitive data. Effective governance strategies, such as real-time tracking and employee training, are crucial to mitigative risks while leveraging GenAI's advantages.

https://www.darkreading.com/threat-intelligence/employees-sensitive-data-genai-prompts

5 Key Cyber Security Trends for 2025

TLDR: In 2025, key cyber security trends include: 1) AI's role in cyber warfare and disinformation, 2) ransomware evolving into data exfiltration, 3) increased threats from infostealers targeting sensitive data, 4) vulnerabilities in edge devices as entry points for attacks, and 5) cloud security challenges due to misconfigurations. Organizations must adopt proactive risk management and unified security strategies to combat advanced threats.

https://blog.checkpoint.com/research/5-key-cyber-security-trends-for-2025/

Harnessing AI for Proactive Threat Intelligence and Advanced Cyber Defense

AI revolutionizes cybersecurity by enabling real-time threat detection, proactive defense, and enhanced data protection. It learns from data patterns, identifies potential threats before they manifest, and automates defense mechanisms to combat sophisticated attacks. Despite its advantages, ethical concerns and potential biases must be addressed. Key benefits include efficient incident management, better endpoint security, and continuous adaptation to emerging threats. Integrating AI with human expertise is vital for robust future cyber defense.

Harnessing AI for Proactive Threat Intelligence and Advanced Cyber Defense

AI-generated Phishing Emails Are Getting Very Good at Targeting Executives

AI-generated phishing emails are increasingly targeting corporate executives. Companies like Beazley and eBay report a rise in hyper-personalized scams using personal details gathered via AI analysis. Experts highlight that AI enables hackers to craft convincing phishing emails that bypass security measures. Phishing is the starting point for over 90% of cyberattacks, with the global cost of data breaches rising. AI's role in identifying vulnerabilities enhances the sophistication of these scams, making them more difficult to detect.

AI-generated phishing emails are getting very good at targeting executives

It’s Only a Matter of Time Before LLMs Jump Start Supply-chain Attacks

LLMs may enhance supply-chain attacks by aiding social engineering, particularly spear phishing. Criminals can exploit existing LLMs rather than creating their own, making attacks more feasible. In 2025, targeted scams based on personal data could rise significantly, as attackers craft convincing messages. Previous incidents, like the Change Healthcare ransomware attack, underscore the potential impacts. Security tools are emerging, but users must remain vigilant against phishing and voice cloning scams. Effective prevention includes careful scrutiny of emails and communications.

https://www.theregister.com/2024/12/29/llm_supply_chain_attacks/

With Open Source Artificial Intelligence, Don’t Forget the Lessons of Open Source Software

Open source AI raises innovation and security concerns, similar to past debates about open source software. CISA emphasizes learning from open source software security to promote responsible development of open foundation models while addressing potential harms. Key lessons include sustainability in contributions to open source ecosystems and prioritizing secure design and transparency in AI model development. CISA advocates for dual-use tools, acknowledging that while risks exist, the benefits for cybersecurity outweigh them. Ensuring safe, secure, and trustworthy AI models is crucial for fostering innovation.

With Open Source Artificial Intelligence, Don’t Forget the Lessons of Open Source Software

Scroll to Top