WhatsApp Uncovers Sophisticated Hacking Campaign Targeting Fewer Than 200 Users
WhatsApp has disclosed a highly targeted hacking campaign that compromised the devices of fewer than 200 individuals globally, raising fresh concerns about the security of even the most widely used encrypted messaging platforms. The attack leveraged a “zero-click” exploit, meaning it required no interaction from the victim to be effective.
The campaign, which ran over a 90-day period earlier this year, exploited a previously unknown vulnerability in WhatsApp that, when combined with flaws in Apple’s iOS and macOS operating systems, allowed attackers to install surveillance software on victims’ devices. The nature of the exploit enabled it to operate invisibly, giving hackers remote access to sensitive data such as messages, calls, and potentially even microphones and cameras.
WhatsApp has since released a security patch to close the vulnerability and is urging all users to update to the latest version of the app. Apple has also addressed the underlying issue in its recent system updates. While the total number of affected users remains small, the precision of the attack and the use of sophisticated tools indicate that it was likely orchestrated by a well-funded entity, possibly a state actor or private spyware vendor.
Early indicators suggest that the targets included members of civil society, such as journalists, activists, and lawyers—people often under threat from surveillance operations. Forensic support is being offered to potential victims to help assess the extent of the intrusion and mitigate further risk.
This latest breach highlights the growing threat of advanced spyware and the importance of vigilance, even on encrypted platforms. As attackers increasingly exploit zero-day vulnerabilities with surgical precision, tech companies are under pressure to strengthen real-time threat detection and accelerate security response measures. Users, meanwhile, are reminded to keep their devices and apps up to date to reduce exposure to emerging threats.
Meta Under Fire for Creating Flirty Celebrity Chatbots Without Permission
Meta, the parent company of Facebook and Instagram, is facing backlash after it was revealed that the company created AI-powered chatbots impersonating celebrities like Taylor Swift, Scarlett Johansson, and Selena Gomez—without their consent. These chatbots engaged users in flirtatious and sometimes sexually suggestive conversations, and in some cases generated photorealistic intimate images, sparking serious ethical and legal concerns.
Some of the chatbots were made by users of Meta’s AI tools, but at least three—including two featuring Taylor Swift—were developed internally by a Meta employee for product testing. The bots reportedly accumulated more than 10 million interactions and were quietly removed shortly before the revelations became public.
Meta acknowledged that generating intimate images of adult celebrities, and any suggestive depictions of minors, violated its policies, and conceded that it had failed to enforce those rules adequately. Meta’s policies prohibit nude or suggestive content and the impersonation of real individuals, especially when presented without clear labels or disclaimers. Yet many of these chatbots lacked such transparency, blurring the line between fantasy and reality.
Legal experts warn this kind of unauthorized use likely breaches “right of publicity” laws, which protect people’s likenesses and names from commercial misuse without permission. Beyond legal risks, industry observers express concern about the potential harm to celebrities, as these AI chatbots could fuel unhealthy obsessions and confusion among users.
Meta has since removed the offending chatbots and pledged to strengthen its AI guidelines to prevent similar incidents. This episode underscores the growing challenge tech companies face in responsibly managing AI-generated content, especially when it involves real-world individuals and sensitive material. The incident has sparked a broader debate about the ethics of AI, celebrity rights, and the limits of digital impersonation.
Intel Amends CHIPS Act Deal, Secures $5.7 Billion Early Payment from US Commerce Department
Intel has announced a significant amendment to its agreement under the U.S. CHIPS Act, allowing the company to receive an early payment of $5.7 billion from the U.S. Commerce Department. This move accelerates funding intended to support Intel’s ambitious semiconductor manufacturing expansion across the United States.
The CHIPS Act, designed to boost domestic chip production and reduce reliance on foreign suppliers, allocates substantial federal funds to companies investing in advanced semiconductor facilities. Intel, one of the biggest beneficiaries, has been planning large-scale investments in new fabs, particularly in Ohio and Arizona, aiming to strengthen the U.S. semiconductor supply chain.
By revising the terms of its deal, Intel gains quicker access to a substantial portion of the allocated funds. The early payment is expected to fast-track construction and equipment procurement, helping Intel maintain momentum in ramping up chip production amid ongoing global shortages and geopolitical tensions.
The amendment reflects close cooperation between Intel and the Commerce Department to ensure funds are deployed efficiently. Intel executives have expressed optimism that this accelerated capital infusion will bolster innovation, job creation, and U.S. leadership in semiconductor technology.
This development comes as the chip industry faces intense pressure to innovate while managing supply chain risks. Intel’s faster access to government support positions it to better compete with rivals such as TSMC and Samsung, which are also ramping up domestic and international manufacturing capacity.
Overall, Intel’s amended CHIPS Act deal marks a crucial step in the broader U.S. strategy to revitalize the semiconductor sector and secure critical technology infrastructure for the future.
Google Set to Face Modest EU Antitrust Fine in Adtech Investigation
Google is expected to receive a modest antitrust fine from the European Union following a lengthy investigation into its advertising technology practices. The probe, which lasted around four years, focused on allegations that Google unfairly favored its own advertising services over competitors, potentially disadvantaging advertisers and publishers across the region.
This anticipated fine marks a shift in the EU’s enforcement approach, with regulators opting for behavioral remedies and smaller penalties rather than large-scale fines or forced divestitures. Unlike earlier cases where Google faced multi-billion euro penalties and demands to break up parts of its business, this time the EU appears to be seeking less severe sanctions while encouraging changes in how Google operates its adtech platforms.
The investigation centered on Google’s dominance in key advertising tools and services, including its exchanges and publisher products. Critics argue that Google’s control over these platforms gives it an unfair advantage, limiting competition and innovation in the adtech ecosystem. Google, however, maintains that its systems provide advertisers and publishers with valuable choices and that its conduct complies with existing regulations.
While the exact amount of the fine has not been disclosed, it is expected to be significantly lower than previous antitrust penalties imposed on Google by EU authorities. The company’s advertising business remains a massive revenue generator, accounting for a substantial majority of its total income.
Separately, Google is also facing regulatory scrutiny under new EU rules that could lead to more severe penalties if found guilty of favoring its own services in ways that harm competition. This ongoing legal pressure underscores the challenges global tech giants face as regulators worldwide seek to rein in their market power.
The outcome of this latest antitrust action will have important implications for Google’s ad business in Europe and could shape future regulatory strategies around digital advertising and platform fairness.