Social Media
X Warns Creators Of 90-Day Ban For Undisclosed AI War Videos, Mandates Labelling Of Synthetic Content
Social media platform X has introduced tougher rules targeting misleading artificial intelligence content, warning creators that posting AI-generated war videos without disclosure could result in a 90-day suspension from its revenue-sharing program.
The move comes as global tensions rise and AI-generated visuals related to ongoing geopolitical conflicts rapidly circulate online, raising concerns about misinformation and digital manipulation.
New Rules Aim to Curb Misleading Content
In a statement outlining the policy update, X's head of product Nikita Bier said the platform is strengthening its content policies to ensure users receive reliable information, particularly during sensitive events such as military conflicts and political crises.
According to the company, recent advances in AI technology have made it easier than ever to create highly realistic fake videos, often referred to as deepfakes, that can mislead viewers and spread quickly across social networks.
Platform officials revealed that in one recent investigation, a single operator was found managing more than 30 accounts used to distribute AI-generated war footage. Some of these profiles had allegedly been hacked and repurposed to appear legitimate, increasing the credibility of the fabricated videos.
“Made with AI” Label Now Mandatory
Under the revised guidelines, any war-related video created using artificial intelligence must include a clear “Made with AI” label. The requirement is intended to alert viewers that the footage is synthetic rather than authentic battlefield material.
Creators who fail to disclose AI-generated content will face a temporary ban from X’s creator monetization program for up to 90 days. Repeated violations could lead to permanent removal from revenue-sharing benefits, the company confirmed.
The platform said the transparency measure is designed to limit the spread of manipulated media and strengthen trust among users.
Growing Platform Traffic During Global Tensions
The announcement comes amid heightened interest in real-time updates surrounding the ongoing tensions involving the United States, Israel, and Iran. As international developments unfold, social media platforms have seen a significant spike in traffic from users seeking breaking news and firsthand footage.
At the same time, rumors and speculation circulating online — including claims involving Ali Khamenei — have triggered waves of unverified videos and misleading posts.
Experts say the increasing sophistication of AI-generated media makes it difficult for everyday users to distinguish between authentic footage and fabricated content.
Experts Warn About Deepfake Risks
Technology analysts warn that AI-powered deepfakes could have far-reaching consequences beyond misinformation. Fabricated war footage can influence public opinion, disrupt diplomatic relations, and even affect financial markets by spreading panic or confusion.
Industry observers see X’s latest policy change as a step toward improving accountability on digital platforms while addressing the growing challenge of AI-driven misinformation.
The company also indicated that it is exploring additional tools and automated systems to better detect and label synthetic media in the future.
As AI technology continues to evolve, platforms are increasingly balancing the need for fast information sharing with the responsibility to maintain accuracy and transparency online.
Child Safety
Countries Move to Restrict Facebook, Instagram for Children; India Enters Global Debate
Governments around the world are moving decisively to regulate children’s access to social media platforms such as Facebook and Instagram, citing growing evidence of harm to mental health, online safety risks, and addictive design practices. From sweeping bans to stricter age limits, a new wave of regulation is reshaping how states view the responsibilities of Big Tech toward young users.
India has now entered the global debate, with policymakers beginning consultations and closely tracking international developments to assess whether stronger safeguards are needed for children in the digital ecosystem.
Australia Sets a New Global Benchmark
Australia has emerged as a global trendsetter after passing landmark legislation in December 2024 that bans children under 16 from using social media platforms. Unlike earlier regulatory models, the Australian law places the burden of compliance squarely on technology companies, requiring them to deploy robust age-verification systems and actively prevent underage access.
Authorities said the decision followed mounting research linking excessive social media use among children and adolescents to anxiety, depression, sleep disruption, and self-harm. Since its adoption, the Australian framework has become a reference point for governments worldwide exploring similar measures.
France Acts as Europe Debates Age Thresholds
France has already enacted laws restricting social media access for children under 15, mandating parental consent and strengthening enforcement provisions. French lawmakers argued that voluntary safety tools offered by platforms had failed to protect minors from harmful content and addictive features.
Across the European Union, momentum is building. In November 2025, the European Parliament recommended setting 16 as the minimum age for social media use. Although non-binding, the recommendation has increased pressure on national governments to legislate.
Countries such as Denmark, Greece, Spain, and Ireland are now reviewing regulatory options, with legislators expressing concern that existing age limits — typically set at 13 by platforms — are widely ignored and easily circumvented.
UK Signals Tougher Action
In the United Kingdom, Prime Minister Keir Starmer has confirmed that his government is actively examining restrictions on children’s social media use. British officials have pointed to rising instances of cyberbullying, online exploitation, and youth mental health challenges as drivers behind the push for stricter controls.
While no legislation has yet been finalised, officials have indicated that protecting minors online is becoming a policy priority.
Malaysia Introduces Under-16 Ban
In Southeast Asia, Malaysia has announced a ban on social media access for users below 16 years of age. The move forms part of a broader effort to tighten online safety laws and reduce children’s exposure to harmful content and long-term psychological risks.
India Begins Early-Stage Consultations
India has not yet proposed a nationwide restriction, but the debate has gained traction. Goa has initiated consultations on potential age-based limits inspired by Australia’s model, with the state’s IT minister confirming on January 27 that options are being examined.
Officials emphasise that discussions are still at a preliminary stage, focused on balancing children’s safety with digital inclusion. However, global regulatory trends have prompted a reassessment of whether India’s current legal framework adequately protects minors online.
Social Media Platforms Face Mounting Scrutiny
Despite claiming to enforce a minimum age of 13, platforms owned by Meta and other tech companies continue to face criticism for weak age-verification systems. Regulators and child-rights advocates argue that self-declared ages and basic checks do little to prevent underage access.
Experts warn that algorithm-driven feeds, infinite scrolling, and engagement-optimised design amplify risks for children, reinforcing calls for stronger state intervention rather than voluntary compliance.
A Broader Shift in Global Digital Policy
The growing number of bans and restrictions signals a fundamental change in how governments view social media — no longer just communication tools, but powerful systems with deep social and psychological impacts.
As more countries move toward legislation or pilot regulations, pressure is increasing on technology companies to adapt their platforms and business models. Observers say the next phase will determine whether global standards emerge or whether companies face a patchwork of national rules.
With India now watching closely, the outcome of these international experiments may shape the country’s own approach to protecting children in an increasingly digital world.
Cybersecurity
Twitter Hacked: Data Of 400 Million Users Up For Sale, Sundar Pichai and Salman Khan On The List
NEW DELHI: Twitter faces a major security scare as a hacker claims to have accessed personal data of over 400 million users, including high-profile names like Sundar Pichai, CEO of Google, Bollywood actor Salman Khan, Donald Trump Jr., Steve Wozniak, and singer Charlie Puth. The hacker, known online as Ryushi, shared sample data to substantiate the claim.
The alleged breach reportedly includes emails and phone numbers, with the hacker demanding that Twitter or CEO Elon Musk purchase the data to avoid potential fines under the European Union’s General Data Protection Regulation (GDPR). Such fines could reportedly reach up to USD 276 million, similar to penalties faced by other tech companies for large-scale data leaks.
In a message posted online, Ryushi stated:
“Twitter or Elon Musk, if you are reading this, you are already risking a GDPR fine over the 5.4 million user breach. Imagine the fine for a 400 million user breach. Your best option to avoid paying $276 million is to buy this data exclusively.”
Cybersecurity experts have verified portions of the leaked data. Alon Gal, co-founder and chief technology officer at Israel-based cybercrime intelligence firm Hudson Rock, confirmed that the data checked by third parties appears genuine. According to Gal, the breach likely exploited a flaw in Twitter’s API, enabling the hacker to query any email or phone number and retrieve associated Twitter profiles.
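The flaw described is a classic account-enumeration weakness: an endpoint that maps a submitted email or phone number straight to a profile, with no authorization or rate limiting, lets an attacker replay leaked contact lists and harvest matching accounts at scale. The following toy sketch illustrates the mechanism only; the function names and data are hypothetical and do not represent Twitter's actual API.

```python
# Hypothetical illustration of the enumeration flaw described above.
# ACCOUNTS and vulnerable_lookup() are stand-ins, not Twitter's real API.

# Toy "database" standing in for the platform's account records.
ACCOUNTS = {
    "alice@example.com": "@alice",
    "+15551234567": "@bob",
}

def vulnerable_lookup(contact):
    """A contact-discovery endpoint with no auth check or rate limit:
    any email or phone number returns the linked handle (or None)."""
    return ACCOUNTS.get(contact)

def enumerate_handles(candidates):
    """An attacker replays a leaked contact list against the endpoint,
    keeping every contact that resolves to a profile."""
    results = {}
    for contact in candidates:
        handle = vulnerable_lookup(contact)
        if handle is not None:
            results[contact] = handle
    return results

# Only contacts present in the records are linked to handles.
harvested = enumerate_handles(
    ["alice@example.com", "+15551234567", "nobody@example.com"]
)
print(harvested)
```

The mitigation for this class of bug is to require authentication, rate-limit lookups per client, and avoid returning profile data keyed directly on contact details.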
This revelation comes in the wake of an ongoing investigation by the Irish Data Protection Commission (DPC) into a previous Twitter data leak affecting 5.4 million users. That earlier incident exposed email addresses, phone numbers, and Twitter handles, highlighting ongoing vulnerabilities in the platform’s data protection measures.
Twitter has yet to issue an official statement on the 400-million-user breach. The incident raises fresh concerns over the social media platform’s ability to safeguard sensitive user information and maintain compliance with global privacy regulations.