Israeli firm Black Cube confirms Cyprus sting to ‘expose corruption’
Israeli private intelligence firm Black Cube has confirmed its involvement in an undercover sting operation in Cyprus, saying the effort was aimed at exposing alleged corruption among political and business figures.
The firm acknowledged producing a covertly recorded video that surfaced earlier this year and quickly sparked controversy across the country. The footage reportedly features individuals discussing investment opportunities, political access, and financial contributions, raising concerns about possible unethical practices within Cyprus's political system. Those seen in the video have denied any wrongdoing.
The revelations led to political fallout, including the resignation of a senior aide linked to President Nikos Christodoulides. The development intensified scrutiny over governance standards and transparency in the country’s political environment.
In a public statement, Black Cube said it was “proud” of its role in uncovering what it described as corrupt practices, adding that its work contributes to improving accountability and fostering a fair business climate. The firm also stated that it is cooperating with authorities investigating the matter.

However, Black Cube has not disclosed who commissioned the operation, prompting speculation about the motivations behind the sting. The lack of clarity has fueled debate over the growing involvement of private intelligence firms in political and corporate affairs.
Authorities in Cyprus are now examining both the allegations raised in the video and the methods used to obtain the recordings. The case has drawn attention to the ethical and legal implications of covert surveillance conducted by private entities.
Founded by former intelligence officers, Black Cube has been involved in several high-profile operations globally. The Cyprus episode adds to ongoing discussions about the influence of private espionage in democratic systems and the fine line between investigation and intrusion.
OpenAI Identifies Security Issue in Third-Party Tool, Says No User Data Was Accessed
OpenAI has disclosed a security issue linked to a third-party tool used within its platform, assuring users that no data was accessed or compromised during the incident.
According to the company, the issue was identified through its internal monitoring systems and was traced to an external vendor service rather than OpenAI’s own infrastructure. The vulnerability was swiftly contained, and mitigation measures were implemented to prevent any potential misuse.
OpenAI stated that a detailed investigation found no evidence of unauthorized access to user data. The company emphasized that its core systems remain secure and that the incident did not impact the confidentiality or integrity of user information.
While the exact nature of the vulnerability and the third-party tool involved have not been publicly disclosed, OpenAI confirmed that it is conducting a comprehensive review of its external integrations. The company is also working closely with the vendor to strengthen safeguards and ensure adherence to its security protocols.

The incident underscores the growing risks associated with third-party dependencies in modern technology ecosystems. Even when a company’s internal systems are secure, vulnerabilities in external tools can present potential exposure points.
OpenAI reiterated that users are not required to take any action at this time. It also reaffirmed its commitment to transparency and security, noting that it will continue to monitor the situation and provide updates if necessary.
The disclosure comes at a time when the artificial intelligence industry faces increasing scrutiny over data protection and platform reliability, with companies prioritizing stronger defenses against emerging cybersecurity threats.
Meta Must Face Youth Addiction Lawsuit by Massachusetts, Court Rules
Meta Platforms must face a lawsuit filed by the state of Massachusetts alleging that its social media platforms were deliberately designed to be addictive for young users, after a court ruled the case can proceed.
The ruling by the Massachusetts Supreme Judicial Court marks a significant development in ongoing legal challenges against major technology firms. The court rejected Meta’s argument that it is protected under Section 230 of the Communications Decency Act, which generally shields online platforms from liability over user-generated content. Instead, the court held that the claims focus on Meta’s own design choices and business practices, allowing the lawsuit to move forward.

Massachusetts Attorney General Andrea Joy Campbell has accused the company of intentionally incorporating features on platforms such as Instagram that encourage excessive use among children and teenagers. These features include endless scrolling, push notifications, and visible “like” counts, which the lawsuit argues are designed to maximize engagement and keep younger users online for longer periods.
Meta has denied the allegations, stating that it has introduced numerous tools and safeguards aimed at protecting young users and supporting their well-being. The company also said it will continue to defend itself as the case progresses through the courts.
The decision is part of a broader wave of legal scrutiny facing social media companies in the United States, where regulators and lawmakers are increasingly focused on the potential mental health impacts of digital platforms on younger audiences.
Legal experts say the case could have far-reaching implications, particularly in defining the extent to which technology companies can be held accountable for the design of their products. As the lawsuit proceeds, it is expected to play a key role in shaping future debates around platform responsibility and online safety.
Japan Approves Additional $4 Billion for Chipmaker Rapidus
The Japanese government has approved an additional $4 billion in funding for Rapidus, reinforcing its push to rebuild domestic semiconductor manufacturing and strengthen technological self-reliance.
The fresh investment brings total public support for Rapidus to well over 2 trillion yen, highlighting Japan’s determination to re-establish itself as a major player in the global chip industry. The funding will primarily support research, development, and the construction of advanced fabrication facilities aimed at producing next-generation semiconductors.

Founded in 2022 with backing from the government and leading Japanese corporations, Rapidus is focused on developing cutting-edge 2-nanometre chips. These advanced semiconductors are expected to play a crucial role in emerging technologies such as artificial intelligence, high-performance computing, and data infrastructure. The company has set an ambitious target to begin mass production by 2027.
Japan’s latest move comes amid intensifying global competition in the semiconductor sector, with countries investing heavily to secure supply chains and reduce dependence on foreign manufacturers. Policymakers in Japan view semiconductors as a strategic industry essential to economic security and technological leadership.
The government is also encouraging collaboration between Rapidus and global technology partners to accelerate innovation and bridge gaps in expertise. These partnerships are expected to help the company compete with established industry leaders in the United States, Taiwan, and South Korea.
Analysts note that while Rapidus faces significant challenges in catching up with dominant chipmakers, sustained financial backing and a coordinated national strategy could provide a strong foundation for growth.
The funding decision signals Japan’s long-term commitment to revitalizing its semiconductor ecosystem and securing a foothold in one of the world’s most critical and competitive industries.
Tesla’s Supervised Self-Driving Software Gets Dutch Approval, First in Europe
Tesla has secured regulatory approval in the Netherlands for its “Full Self-Driving (Supervised)” software, marking the first authorization of the system in Europe.
The approval, granted by Dutch vehicle authority RDW, allows Tesla to deploy its advanced driver-assistance technology on public roads under specific conditions. The move represents a significant milestone for the company as it seeks to expand its autonomous driving capabilities beyond the United States.
Despite its branding, the system does not enable fully autonomous driving. It is classified as a Level 2 driver-assistance feature, meaning the vehicle can perform tasks such as steering, braking, and acceleration, but requires constant human supervision. Drivers must remain attentive at all times and be ready to take control when necessary.
The Netherlands has become the first European country to approve the technology, reflecting its relatively progressive regulatory approach toward innovation in mobility. The decision follows extensive testing and evaluation to ensure compliance with stringent European safety standards.
For Tesla, the approval is expected to serve as a gateway to broader adoption across the European Union. While the authorization currently applies only within the Netherlands, it could influence regulators in other member states to consider similar approvals in the future.
The development comes amid increasing competition in the autonomous driving space, with automakers and technology firms racing to refine self-driving systems. European regulators, however, have maintained a cautious stance, prioritizing safety and accountability.
Tesla has indicated that it will begin rolling out the feature to eligible customers in the Netherlands soon. The company continues to emphasize that its system is designed to assist drivers rather than replace them, even as it advances toward more sophisticated levels of automation.
The approval marks a key step in Tesla’s efforts to scale its self-driving technology globally.