New Delhi: The Delhi High Court on Thursday (October 24) directed the Centre to file a status report on the measures taken by it to counter the increasing menace of deepfake technology.
A bench comprising Chief Justice Manmohan and Justice Tushar Rao Gedela was hearing two pleas filed by journalist Rajat Sharma and lawyer Chaitanya Rohilla seeking regulation of deepfake technology.
It needs to be dealt with by authorities on a priority basis: High Court
The High Court, calling the increasing menace of deepfake technology a very serious issue, said it needed to be dealt with by the authorities on a priority basis.
The bench also asked for the report to highlight measures taken at the level of the government and whether there would be a high-powered committee to suggest solutions.
MeitY looking into the issue, Centre told the High Court
The Additional Solicitor General (ASG), appearing for the Centre, told the court that the Union Ministry of Electronics and Information Technology (MeitY) was looking into the issue.
The counsel appearing for one of the petitioners told the bench that several countries had enacted legislation to deal with the issue and India was far behind. The counsel further said that most deepfakes targeted women, including through nudity, and the authorities concerned were unable to resolve the issue.
The court, during the hearing, observed that artificial intelligence (AI) could not be prohibited as people needed it. “We have to remove the negative part of the technology and keep the positive part,” the bench said, news agency PTI reported.
High Court had earlier expressed concern over misuse of deepfake technology
The court had earlier, while expressing its concern over the misuse of deepfake technology, observed that it was going to be a serious menace to society and said that the Centre must start working on regulating its spread.
The plea said that there is a threat of potential misuse of deepfake technology and a pressing need for strict enforcement and proactive action to mitigate the harm associated with such misuse.