Business News
Kanpur Oil Scam: 35 Tons of Oil Ordered, Criminals Send Water, Rs 34 Lakh Stolen
A significant fraud case has come to light in Kanpur, where a trader who ordered 35 tons of used edible oil was instead delivered water, resulting in a financial loss of Rs 34 lakh. Police at Juhi Station have registered an FIR, and investigations are ongoing to track down the perpetrators.
Fraud Orchestrated via Social Media Connections
The victim, Anil Kumar Dixit, manager at Navgrah Edible Oils in Yashodanagar, connected with alleged agents Nitin and Suresh Adukia from Juhi through a social media acquaintance. The scammers claimed to act as commission agents for suppliers of used edible oil and fatty acids, offering below-market rates.
In June 2025, Dixit’s firm placed an order at Rs 92.5 per kg and sent driver Pira Ram to collect the shipment. On October 31, Rs 34 lakh was transferred to the bank account provided by the fraudsters.
Fake Documents Used to Build Trust
The criminals sent falsified weight slips and e-way bills via WhatsApp, deceiving the trader. Suspecting inconsistencies, the driver arranged a separate weighment, revealing only 24 tons of material. Laboratory tests in Kanpur later confirmed that the delivered substance was water, not oil.
Juhi police confirmed that the FIR has been filed, and a manhunt for the accused is underway.
Growing Trend of Digital Trade Frauds
Experts say the Kanpur case reflects evolving methods of trade fraud, where scammers exploit social media and digital platforms to manipulate even experienced traders. Large transactions without prior verification of goods, payments, and delivery are increasingly targeted.
Rising Fraud Cases in Kanpur
The region has seen multiple high-profile scams in recent years:
- Gold and bullion traders losing millions through WhatsApp scams
- Confiscation of counterfeit edible oils and engine oils
- Interstate networks distributing adulterated petrol and diesel uncovered by STF
These incidents indicate that both large and small traders are vulnerable to sophisticated fraud schemes.
Police and Administrative Response
Authorities have urged traders to exercise caution in online transactions and to verify delivery, payments, and product quality before finalizing high-value deals. The police are actively pursuing the culprits involved in this latest Kanpur scam.
Artificial Intelligence
Billionaire at 22: Indian-origin Surya Midha Breaks Mark Zuckerberg’s Record
Indian-origin entrepreneur Surya Midha, at just 22, has achieved a historic milestone by becoming the world’s youngest self-made billionaire, surpassing the record previously held by Mark Zuckerberg. The announcement comes amid a surge in artificial intelligence–driven startups reshaping the global technology landscape.
Billionaire Status at 22
The international business magazine Forbes listed Midha among the world’s billionaires, estimating his net worth at $2.2 billion (around ₹18,000 crore). Midha co-founded Mercor, an AI-powered recruitment platform that has quickly gained recognition for its innovative approach to talent acquisition.
Mark Zuckerberg became a billionaire at 23, making Midha’s achievement a landmark in entrepreneurial history.
Mercor: AI Revolutionizing Recruitment
Mercor leverages artificial intelligence to automate and streamline hiring processes. The platform conducts interviews using AI avatars, evaluating candidates’ skills, experience, and responses to help companies make faster and more accurate hiring decisions. Several major tech firms and AI research labs in Silicon Valley have reportedly adopted the platform.
Rapid Growth and Company Valuation
Driven by growing demand in the AI sector, Mercor was valued at nearly $10 billion (approximately ₹83,000 crore) last year. Experts suggest that AI-driven recruitment and talent management will continue to expand, creating opportunities for early entrants in this emerging industry.
Indian Roots and Early Achievements
Born in San Jose, California, Midha comes from an Indian-origin family that moved from Delhi to the United States. He excelled academically and in extracurricular activities, including winning national debate championships during his high school years.
Midha studied foreign service at Georgetown University, where he met his co-founders, Brendan Foody and Adarsh Hiremath; together they developed the AI recruitment platform.
AI Driving a New Generation of Young Entrepreneurs
Forbes notes that artificial intelligence is fueling a wave of young entrepreneurs entering the billionaire ranks. Sectors such as AI, automation, and data science are creating new avenues for rapid innovation and financial success.
Surya Midha’s achievement symbolizes this technological shift, illustrating how emerging AI technologies can empower a new generation of innovators to build globally influential companies at unprecedented speed.
Business News
US Military Used Anthropic AI Despite Trump Ban, Raising National Security Policy Concerns
The controversy over the use of artificial intelligence in U.S. national security operations has escalated. Multiple reports indicate that Anthropic’s AI model Claude was deployed during U.S. military operations in the Middle East just hours after President Donald Trump issued a directive restricting federal agencies from using the company’s technology.
The Wall Street Journal and other outlets reported that U.S. Central Command (CENTCOM) used Claude to support intelligence assessments, target identification, and battlefield simulations during coordinated airstrikes on Iranian targets, even as political leaders were distancing federal agencies from the company’s systems.
Background: Federal Ban and Supply Chain Risk Designation
Late last month, the Trump administration ordered all federal agencies to discontinue use of Anthropic’s AI tools, including Claude, citing national security concerns and labeling the company a supply chain risk after disagreements over how its models could be used in military and surveillance contexts.
Defense Secretary Pete Hegseth said that unrestricted access to AI technology was necessary for essential military applications — but Anthropic’s refusal to grant such access without safeguards prompted the supply chain risk label.
Six‑Month Phase‑Out Period
Although the federal directive formally prohibits further use of Anthropic technology, government guidance included a six‑month transition period for agencies to shift off Claude while maintaining operational capability. This timeline appears to explain the military’s continued use of Claude for the Iran operation.
Experts note that Claude had been deeply integrated into classified military networks — partly due to its earlier partnerships and Pentagon contracts — making an immediate cut‑off operationally complex.
Operational and Policy Implications
The deployment of Claude in an active combat environment has triggered fresh debate over the role of private AI companies in national defense. Critics argue that reliance on commercial systems highlights gaps in policy alignment between civilian regulatory action and military needs, while others warn that such AI use raises ethical, legal, and accountability challenges in warfare planning.
Some technology policy analysts say this episode highlights a broader issue: the rapid integration of AI into defense operations is outpacing the development of coherent, consistent policy frameworks.
Future of Military AI Strategy
In response to the rift with Anthropic, rival AI providers have reportedly moved to fill the gap. OpenAI has been named as a Pentagon partner for ongoing classified AI support, with its models slated to replace Claude over time as military systems transition.
Defense planners now face a pivotal question: how to balance national security imperatives, ethical limits on AI use in lethal operations, and the strategic risks of over‑dependence on private technology firms.
As this policy and technology dispute continues to unfold, it is likely to influence broader debates on AI governance, national security strategy, and the future role of privately developed AI systems in public sector missions.
Business News
AI Safety Debate Intensifies As Musk And Altman Face Off Again In Legal Battle
The debate over artificial intelligence safety and ethical responsibility has reignited as Elon Musk, CEO of Tesla and owner of X, and Sam Altman, CEO of OpenAI, faced off in court over the regulation and risks of AI technology. The ongoing legal dispute now spotlights not only company accountability but broader concerns about user safety and regulatory oversight.
Musk Pushes Safety-First AI Approach
During the hearing, Musk emphasized that AI development must prioritize human safety. He cited his company’s AI system, Grok, noting that no suicide-related incidents have been linked to its use. Musk also raised concerns—though unverified—regarding potential mental health risks associated with OpenAI’s ChatGPT.
Musk argued that rapidly advancing AI systems lacking rigorous safety protocols could pose future societal risks. He stressed that AI must be evaluated not only for technological innovation but also for its impact on human welfare.
OpenAI Defends Its Safety Measures
OpenAI countered by affirming its ongoing efforts to strengthen the safety and reliability of its platforms. The company emphasized that systems like ChatGPT are designed to provide information, assist productivity, and enhance decision-making. OpenAI also cautioned against attributing complex incidents, such as suicides, directly to AI, noting that multiple social and personal factors contribute to such outcomes.
Implications for AI Policy
Legal analysts suggest that the case may set a precedent for AI governance beyond the two companies involved. With generative AI technologies increasingly embedded in education, healthcare, business, and communication, courts and policymakers are under pressure to define clearer safety standards, accountability measures, and data protection requirements.
Experts say the dispute underscores a broader challenge: balancing AI innovation with ethical responsibility, mental health considerations, and user protection. The outcome could influence not only corporate AI policies but also future regulatory frameworks in the United States and potentially internationally.
As the proceedings continue, the tech industry is closely monitoring the case, recognizing it as a defining moment for AI safety, ethical responsibility, and the governance of emerging technologies.
