Business News

CBI Traps CGST Officer Red-Handed Taking ₹5 Lakh Bribe to Bury ₹98 Lakh Tax Demand

Mumbai, December 28, 2025 – The Central Bureau of Investigation (CBI) has arrested a Mumbai CGST superintendent for allegedly accepting a ₹5 lakh bribe to suppress a fabricated ₹98 lakh tax demand against a private company. The officer, Ankit Aggarwal of CGST Audit-1 Mumbai, was caught red-handed during a sting operation initiated after the firm’s director lodged a complaint.

Bribe Demand and Arrest

According to the CBI, Aggarwal had audited the company on November 26 and threatened to impose the fabricated ₹98 lakh liability unless he received a payoff. He initially demanded ₹20 lakh but eventually agreed to ₹17 lakh as a “settlement.” The CBI intervened and caught him accepting the first installment of ₹5 lakh.

Raid Reveals Hidden Assets

Following the arrest, the CBI conducted searches at Aggarwal’s premises, uncovering significant unaccounted assets:

  • ₹18.30 lakh in cash
  • Property documents totaling ₹72.41 lakh, including deeds from April 2025 (₹40.31 lakh) and June 2024 (₹32.10 lakh)
  • Digital files suggesting manipulated audit reports linked to the complainant company

These findings reinforce the CBI’s claim that the bribe was part of a larger scheme to exploit post-audit authority.

Pattern of Tax Bribery

Investigators highlighted that such coercion is increasingly common in GST audits, where officials inflate tax demands to extract illegal payments. The case has been registered under the Prevention of Corruption Act, and a broader probe is underway to identify potential accomplices.

Rising Concerns Over Mumbai’s Tax Corruption

Mumbai has emerged as a hotspot for tax-related corruption, with 15 CGST officials arrested in 2025 for similar scams involving fake refunds and input tax credit frauds. Reports indicate that officials often demand 10–20% of “settled” liabilities, sometimes routed through hawala or cryptocurrency channels.

Guidance for Businesses

Companies are advised to remain vigilant:

  • Verify sudden large tax demands via the GST portal
  • Report “settlement offers” to the CBI at cbi.gov.in/complaint
  • Address delays or refund holds through gst.gov.in
  • CBI Anti-Corruption Hotline: 011-2436-7800

The CBI has emphasized a zero-tolerance approach toward corruption in tax departments, aiming to curb the growing misuse of audit authority for personal gain.


Artificial Intelligence

Billionaire at 22: Indian-origin Surya Midha Breaks Mark Zuckerberg’s Record


Indian-origin entrepreneur Surya Midha, at just 22, has achieved a historic milestone by becoming the world’s youngest self-made billionaire, surpassing the record previously held by Mark Zuckerberg. The announcement comes amid a surge in artificial intelligence–driven startups reshaping the global technology landscape.

Billionaire Status at 22

The international business magazine Forbes listed Midha among the world’s billionaires, estimating his net worth at $2.2 billion (around ₹18,000 crore). Midha co-founded Mercor, an AI-powered recruitment platform that has quickly gained recognition for its innovative approach to talent acquisition.

Mark Zuckerberg became a billionaire at 23, making Midha’s achievement a landmark in entrepreneurial history.

Mercor: AI Revolutionizing Recruitment

Mercor leverages artificial intelligence to automate and streamline hiring processes. The platform conducts interviews using AI avatars, evaluating candidates’ skills, experience, and responses to help companies make faster and more accurate hiring decisions. Several major tech firms and AI research labs in Silicon Valley have reportedly adopted the platform.

Rapid Growth and Company Valuation

Driven by growing demand in the AI sector, Mercor was valued at nearly $10 billion (approximately ₹83,000 crore) last year. Experts suggest that AI-driven recruitment and talent management will continue to expand, creating opportunities for early entrants in this emerging industry.

Indian Roots and Early Achievements

Born in San Jose, California, Midha comes from an Indian-origin family that moved from Delhi to the United States. He excelled academically and in extracurricular activities, including winning national debate championships during his high school years.

Midha pursued higher education in foreign service at Georgetown University, where he met his co-founders, Brendan Foody and Adarsh Hiremath; together they developed the AI recruitment platform.

AI Driving a New Generation of Young Entrepreneurs

Forbes notes that artificial intelligence is fueling a wave of young entrepreneurs entering the billionaire ranks. Sectors such as AI, automation, and data science are creating new avenues for rapid innovation and financial success.

Surya Midha’s achievement symbolizes this technological shift, illustrating how emerging AI technologies can empower a new generation of innovators to build globally influential companies at unprecedented speed.


Business News

US Military Used Anthropic AI Despite Trump Ban, Raising National Security Policy Concerns


The controversy over the use of artificial intelligence in U.S. national security operations has escalated after multiple reports indicated that Anthropic’s AI model Claude was deployed during U.S. military operations in the Middle East just hours after President Donald Trump issued a directive restricting federal agencies from using the company’s technology.

The Wall Street Journal and other outlets reported that the U.S. Central Command (CENTCOM) used Claude to support intelligence assessments, target identification, and battlefield simulations during coordinated airstrikes on Iranian targets — even as political leaders were distancing federal agencies from the company’s systems.

Background: Federal Ban and Supply Chain Risk Designation

Late last month, the Trump administration ordered all federal agencies to discontinue use of Anthropic’s AI tools, including Claude, citing national security concerns and labeling the company a supply chain risk after disagreements over how its models could be used in military and surveillance contexts.

Defense Secretary Pete Hegseth said that unrestricted access to AI technology was necessary for essential military applications — but Anthropic’s refusal to grant such access without safeguards prompted the supply chain risk label.

Six‑Month Phase‑Out Period

Although the federal directive formally prohibits further use of Anthropic technology, government guidance included a six‑month transition period for agencies to shift off Claude while maintaining operational capability. This timeline appears to explain the military’s continued use of Claude for the Iran operation.

Experts note that Claude had been deeply integrated into classified military networks — partly due to its earlier partnerships and Pentagon contracts — making an immediate cut‑off operationally complex.

Operational and Policy Implications

The deployment of Claude in an active combat environment has triggered fresh debate over the role of private AI companies in national defense. Critics argue that reliance on commercial systems highlights gaps in policy alignment between civilian regulatory action and military needs, while others warn that such AI use raises ethical, legal, and accountability challenges in warfare planning.

Some technology policy analysts say this episode highlights a broader issue: the rapid integration of AI into defense operations is outpacing the development of coherent, consistent policy frameworks.

Future of Military AI Strategy

In response to the rift with Anthropic, rival AI providers have reportedly moved to fill the gap. OpenAI has been named as a Pentagon partner for ongoing classified AI support, with its models slated to replace Claude over time as military systems transition.

Defense planners now face a pivotal question: how to balance national security imperatives, ethical limits on AI use in lethal operations, and the strategic risks of over‑dependence on private technology firms.

As this policy and technology dispute continues to unfold, it is likely to influence broader debates on AI governance, national security strategy, and the future role of privately developed AI systems in public sector missions.


Business News

AI Safety Debate Intensifies As Musk And Altman Face Off Again In Legal Battle


The debate over artificial intelligence safety and ethical responsibility has reignited as Elon Musk, CEO of Tesla and owner of X, and Sam Altman, CEO of OpenAI, faced off in court over the regulation and risks of AI technology. The ongoing legal dispute now spotlights not only company accountability but broader concerns about user safety and regulatory oversight.

Musk Pushes Safety-First AI Approach

During the hearing, Musk emphasized that AI development must prioritize human safety. He cited his company’s AI system, Grok, noting that no suicide-related incidents have been linked to its use. Musk also raised concerns—though unverified—regarding potential mental health risks associated with OpenAI’s ChatGPT.

Musk argued that rapidly advancing AI systems lacking rigorous safety protocols could pose future societal risks. He stressed that AI must be evaluated not only for technological innovation but also for its impact on human welfare.

OpenAI Defends Its Safety Measures

OpenAI countered by affirming its ongoing efforts to strengthen the safety and reliability of its platforms. The company emphasized that systems like ChatGPT are designed to provide information, assist productivity, and enhance decision-making. OpenAI also cautioned against attributing complex incidents, such as suicides, directly to AI, noting that multiple social and personal factors contribute to such outcomes.

Implications for AI Policy

Legal analysts suggest that the case may set a precedent for AI governance beyond the two companies involved. With generative AI technologies increasingly embedded in education, healthcare, business, and communication, courts and policymakers are under pressure to define clearer safety standards, accountability measures, and data protection requirements.

Experts say the dispute underscores a broader challenge: balancing AI innovation with ethical responsibility, mental health considerations, and user protection. The outcome could influence not only corporate AI policies but also future regulatory frameworks in the United States and potentially internationally.

As the proceedings continue, the tech industry is closely monitoring the case, recognizing it as a defining moment for AI safety, ethical responsibility, and the governance of emerging technologies.


Copyright © 2022 420 Reports Marijuana News & Information Website | Reefer News | Cannabis News