Crime News

CBI Court Sentences 12 Convicts To 5 Years In 2011 MP PMT Vyapam Case

Indore, December 28, 2025 – More than a decade after the Madhya Pradesh Professional Examination Board (Vyapam) medical entrance test scandal rocked the state, a special court in Indore has sentenced 12 individuals to five years of rigorous imprisonment for their involvement in cheating and impersonation during the 2011 MP Pre-Medical Test (PMT). Each convict was also fined ₹6,000.

The case, prosecuted by the Central Bureau of Investigation (CBI) under directives from the Supreme Court of India, highlights the persistent vulnerabilities in India’s competitive examination system and the sophisticated networks that exploited them.

Convicted Individuals and Their Roles

The convicted included candidates seeking unfair advantages, impersonators taking exams on behalf of others, and middlemen orchestrating the operations. The individuals named in the verdict are: Ashish Yadav (alias Ashish Singh), Satyendra Verma, Dheerendra Tiwari, Brijesh Jaiswal, Durga Prasad Yadav, Rakesh Kurmi, Narendra Chaurasiya, Abhilash Yadav, Khoob Chand Rajput, Pawan Rajput, Lakhan Dhangar, and Sunderlal Dhangar.

A separate accused, Deepak Gautam, was a minor at the time and had already been dealt with by the Juvenile Justice Board in Indore in July 2022, which imposed a fine and bond under the Juvenile Justice Act.

How the Fraud Was Uncovered

The scam came to light on July 24, 2011, when exam officials discovered Satyendra Verma impersonating Ashish Yadav during the MP PMT. The incident prompted a formal complaint at Tukoganj police station in Indore. Initially, the state police charged two individuals, but the case later expanded after the Supreme Court ordered a CBI investigation as part of a wider scrutiny of Vyapam-linked frauds.

The CBI’s Findings

The investigation revealed a well-coordinated network. Middlemen recruited impersonators, arranged their stay in hotels, and provided forged documents and admit cards to enable them to appear for the exam. The prosecution relied on hotel records, documentary evidence, and confessions obtained during the investigation to establish the conspiracy, which the court upheld in its judgment.

This verdict adds another chapter to the ongoing legal proceedings stemming from the Vyapam scandal, which has repeatedly exposed systemic flaws in recruitment and entrance examinations in Madhya Pradesh.

Business News

US Military Used Anthropic AI Despite Trump Ban, Raising National Security Policy Concerns

The controversy over the use of artificial intelligence in U.S. national security operations has escalated after multiple reports indicated that Anthropic’s AI model Claude was deployed during U.S. military operations in the Middle East just hours after President Donald Trump issued a directive restricting federal agencies from using the company’s technology.

The Wall Street Journal and other outlets reported that the U.S. Central Command (CENTCOM) used Claude to support intelligence assessments, target identification, and battlefield simulations during coordinated airstrikes on Iranian targets — even as political leaders were distancing federal agencies from the company’s systems.

Background: Federal Ban and Supply Chain Risk Designation

Late last month, the Trump administration ordered all federal agencies to discontinue use of Anthropic’s AI tools, including Claude, citing national security concerns and labeling the company a supply chain risk after disagreements over how its models could be used in military and surveillance contexts.

Defense Secretary Pete Hegseth said that unrestricted access to AI technology was necessary for essential military applications — but Anthropic’s refusal to grant such access without safeguards prompted the supply chain risk label.

Six‑Month Phase‑Out Period

Although the federal directive formally prohibits further use of Anthropic technology, government guidance included a six‑month transition period for agencies to shift off Claude while maintaining operational capability. This timeline appears to explain the military’s continued use of Claude for the Iran operation.

Experts note that Claude had been deeply integrated into classified military networks — partly due to its earlier partnerships and Pentagon contracts — making an immediate cut‑off operationally complex.

Operational and Policy Implications

The deployment of Claude in an active combat environment has triggered fresh debate over the role of private AI companies in national defense. Critics argue that reliance on commercial systems highlights gaps in policy alignment between civilian regulatory action and military needs, while others warn that such AI use raises ethical, legal, and accountability challenges in warfare planning.

Some technology policy analysts say this episode highlights a broader issue: the rapid integration of AI into defense operations is outpacing the development of coherent, consistent policy frameworks.

Future of Military AI Strategy

In response to the rift with Anthropic, rival AI providers have reportedly moved to fill the gap. OpenAI has been named as a Pentagon partner for ongoing classified AI support, with its models slated to replace Claude over time as military systems transition.

Defense planners now face a pivotal question: how to balance national security imperatives, ethical limits on AI use in lethal operations, and the strategic risks of over‑dependence on private technology firms.

As this policy and technology dispute continues to unfold, it is likely to influence broader debates on AI governance, national security strategy, and the future role of privately developed AI systems in public sector missions.

AI Safety Debate Intensifies As Musk And Altman Face Off Again In Legal Battle

The debate over artificial intelligence safety and ethical responsibility has reignited as Elon Musk, CEO of Tesla and owner of X, and Sam Altman, CEO of OpenAI, faced off in court over the regulation and risks of AI technology. The ongoing legal dispute now spotlights not only company accountability but broader concerns about user safety and regulatory oversight.

Musk Pushes Safety-First AI Approach

During the hearing, Musk emphasized that AI development must prioritize human safety. He cited his company’s AI system, Grok, noting that no suicide-related incidents have been linked to its use. Musk also raised concerns—though unverified—regarding potential mental health risks associated with OpenAI’s ChatGPT.

Musk argued that rapidly advancing AI systems lacking rigorous safety protocols could pose future societal risks. He stressed that AI must be evaluated not only for technological innovation but also for its impact on human welfare.

OpenAI Defends Its Safety Measures

OpenAI countered by affirming its ongoing efforts to strengthen the safety and reliability of its platforms. The company emphasized that systems like ChatGPT are designed to provide information, assist productivity, and enhance decision-making. OpenAI also cautioned against attributing complex incidents, such as suicides, directly to AI, noting that multiple social and personal factors contribute to such outcomes.

Implications for AI Policy

Legal analysts suggest that the case may set a precedent for AI governance beyond the two companies involved. With generative AI technologies increasingly embedded in education, healthcare, business, and communication, courts and policymakers are under pressure to define clearer safety standards, accountability measures, and data protection requirements.

Experts say the dispute underscores a broader challenge: balancing AI innovation with ethical responsibility, mental health considerations, and user protection. The outcome could influence not only corporate AI policies but also future regulatory frameworks in the United States and potentially internationally.

As the proceedings continue, the tech industry is closely monitoring the case, recognizing it as a defining moment for AI safety, ethical responsibility, and the governance of emerging technologies.

Real Estate Industry Shaken: Confident Group Chairman CJ Roy Shoots Himself at Office

The real estate and entertainment industries were left stunned on Friday after Roy Chiriankandath Joseph, popularly known as CJ Roy, Chairman of Kerala-based Confident Group, allegedly shot himself at his Bengaluru office. Sources said Income Tax department officials were present at the premises at the time of the incident.

The incident took place around midday at Roy’s office-cum-residence located on Langford Road in central Bengaluru. According to preliminary information, Roy used a pistol that was in his possession. He was rushed in a critical condition to a private hospital and was later shifted to Narayana Health City, where doctors declared him dead during treatment.

Sources said an Income Tax team had arrived at the office as part of an official exercise. However, there has been no formal statement establishing a direct link between the presence of officials and the incident. Authorities have said that all angles are being examined and the circumstances leading to the incident are under investigation.

News of Roy’s death triggered shockwaves across the real estate sector and the Malayalam film industry. CJ Roy was the founder and chairman of Confident Group, one of Kerala’s prominent real estate developers, with business interests extending beyond India to the United States and the United Arab Emirates (UAE).

According to information available on the company’s official platforms, Confident Group has been active for over 19 years across real estate and allied sectors. The group has claimed to have completed more than 159 projects in Kerala, Bengaluru and Dubai, and has stated that none of its developments were stalled for legal, financial or administrative reasons.

Roy often projected himself as a “zero-debt” entrepreneur, publicly maintaining that most of his projects were executed without bank loans. The company has also stated that several of its projects received CRISIL 7-star ratings, and that land titles for its developments underwent multi-level legal scrutiny before execution.

Beyond real estate, CJ Roy had also made a mark in the film production space. He made his debut as a producer with the big-budget Malayalam film Casanova in 2012. He later went on to associate with high-profile projects, including Mohanlal-starrer Marakkar: Lion of the Arabian Sea (released in 2021). More recently, he was the producer of Identity, starring Tovino Thomas.

Industry insiders said Roy was known for stepping in to invest in projects facing financial distress. He was also associated with the sponsorship of several television programmes and was actively involved in social and community welfare initiatives.

In business circles across Kerala and Karnataka, CJ Roy was regarded as an aggressive yet forward-looking entrepreneur. He was equally known for his fondness for luxury cars and his visibility in high-profile social initiatives, which often kept him in the public eye.

Following the incident, the Income Tax department and other agencies began collating information related to the case. Officials said further action would be guided by the post-mortem report and technical evidence gathered during the inquiry.

CJ Roy’s untimely death is being seen as a major blow to both the real estate and entertainment industries, with several industry figures expressing grief over his passing.
