Deepfakes

The Centre issued an advisory to social media intermediaries to identify misinformation and deepfakes. 
Deepfake refers to a type of synthetic media, typically video or audio, that is generated using artificial intelligence techniques. The term "deepfake" is a combination of "deep learning" (a type of machine learning) and "fake." Deepfake technology can manipulate or alter existing images or videos so that they appear authentic even though they are fabricated or manipulated.

Specifically, deepfakes use machine learning algorithms, such as deep neural networks, to analyze and imitate patterns in data, such as a person's face or voice. By training the algorithm with large amounts of data, it can generate highly realistic images or videos of someone saying or doing things that they may have never actually said or done.

Deepfakes have gained attention and concern due to their potential for misuse and exploitation. They can be used to create fake news, spread disinformation, manipulate public figures, or even be used for non-consensual pornography.

Countermeasures to deepfake technology include developing detection methods to identify manipulated media, raising awareness about the existence and potential risks of deepfakes, and adopting digital literacy practices to help individuals critically evaluate the authenticity of media they encounter online.
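To make the idea of a detection method concrete, here is a minimal, hypothetical PyTorch sketch of a frame-level classifier that scores an image as authentic or manipulated. The tiny architecture, the 128x128 input size, and the random stand-in data are illustrative assumptions; real detectors are trained on large labelled datasets of authentic and manipulated media and use far more sophisticated models and signals.

```python
# Minimal sketch of a frame-level deepfake detector (illustrative only):
# a small CNN produces a single logit per frame, interpreted as the
# likelihood that the frame has been manipulated.
import torch
import torch.nn as nn

class FrameDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global average pool -> (B, 32, 1, 1)
        )
        self.classifier = nn.Linear(32, 1)    # single logit: "how likely is this fake?"

    def forward(self, frames):                # frames: (B, 3, H, W)
        x = self.features(frames).flatten(1)
        return self.classifier(x)

# Toy usage with random tensors standing in for labelled real/fake frames.
model = FrameDetector()
frames = torch.randn(8, 3, 128, 128)          # a batch of 8 RGB frames
labels = torch.randint(0, 2, (8, 1)).float()  # 1 = manipulated, 0 = authentic
loss = nn.BCEWithLogitsLoss()(model(frames), labels)
loss.backward()                               # one illustrative training step
print(f"toy training loss: {loss.item():.3f}")
```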

Key Provisions of the Advisory:

● Identify deepfakes: Ensure that due diligence is exercised and reasonable efforts are made to identify misinformation and deepfakes.
● Quick action: Such cases are to be acted upon expeditiously, well within the timeframes stipulated under the IT Rules, 2021.
● Caution for users: Users must be cautioned not to host such information/content/deepfakes.
● Time period: Remove any such reported content within 36 hours of the report.
● Expeditious action: Act well within the timeframes stipulated under the IT Rules, 2021, and disable access to the content/information.
A deepfake is a video or image that has been edited using an algorithm to replace a person in the original with someone else, in a way that makes it look authentic. Deepfakes use a form of artificial intelligence called deep learning to create images of fake events, i.e. events that never happened. Deep learning is a subset of machine learning that uses artificial neural networks, inspired by the human brain, to learn from large data sets.
Deepfake imagery could be an imitation of a face, body, sound, speech, environment, or any other personal information manipulated to create an impersonation.

How do deepfakes work?

● Deepfakes use deep learning, AI, and photo-editing techniques to create images and videos of events that never occurred.
  ■ In particular, Generative Adversarial Networks (GANs), a class of machine learning models, are used to create such videos.
  ■ A GAN consists of a generator and a discriminator: the generator learns from the training data to create new images, while the discriminator evaluates them for realism and its feedback drives further refinement (a minimal sketch of this interplay follows this list).
● Deepfakes also employ a deep-learning network called a variational auto-encoder, a type of artificial neural network commonly used for facial recognition. Auto-encoders detect facial features while suppressing visual noise and "non-face" elements, enabling a versatile "face swap" model built on shared features of the source and target faces.
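To illustrate the generator/discriminator interplay described above, here is a minimal, hypothetical PyTorch sketch of a GAN training loop. The tiny networks, the 64-dimensional stand-in for an image, and the random "real" batch are illustrative assumptions; actual deepfake systems train far larger networks on real face data.

```python
# Minimal GAN sketch (illustrative only): a generator learns to map random
# noise to synthetic samples while a discriminator learns to tell those
# samples apart from "real" data. All sizes and data here are toy assumptions.
import torch
import torch.nn as nn

latent_dim, image_dim = 16, 64

generator = nn.Sequential(                 # noise -> synthetic sample
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, image_dim), nn.Tanh(),
)
discriminator = nn.Sequential(             # sample -> "realness" logit
    nn.Linear(image_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_batch = torch.randn(32, image_dim)    # stand-in for a batch of real face data

for step in range(3):                      # a few illustrative adversarial steps
    # 1. Discriminator: learn to separate real samples from generated ones.
    fake_batch = generator(torch.randn(32, latent_dim)).detach()
    d_loss = bce(discriminator(real_batch), torch.ones(32, 1)) + \
             bce(discriminator(fake_batch), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2. Generator: produce samples the discriminator accepts as real.
    fake_batch = generator(torch.randn(32, latent_dim))
    g_loss = bce(discriminator(fake_batch), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    print(f"step {step}: d_loss={d_loss.item():.3f}  g_loss={g_loss.item():.3f}")
```

The design point to notice is the adversarial loop: the discriminator is rewarded for distinguishing real from generated samples, the generator is rewarded for fooling it, and each improves in response to the other, which is what eventually yields convincing output.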
Issues associated with Deepfakes

 ● Misinformation and Disinformation: Deepfakes can be used to create fake videos of politicians or public figures, leading to misinformation and potentially manipulating public opinion.
 ● Privacy Concerns: Deepfakes can be used to create damaging content featuring individuals without their consent, leading to privacy violations and potential harm to reputations. Deepfakes are thus a breach of personal data and a violation of an individual's right to privacy.

● Lack of Regulation: A major issue is the lack of a clear legal definition of deepfake technology and of the activities that constitute deepfake-related offences in India. This makes it difficult to prosecute individuals or organisations that engage in malicious or fraudulent activities using deepfakes.

 ● Challenges in Detection: Developing effective tools to detect deepfakes is an ongoing challenge, as the technology used to create them evolves. 

● Gender inequity: Women form about 90% of the victims of crimes like revenge porn, non-consensual pornography, and other forms of online harassment. Deepfakes add one more weapon to this list, further shrinking the online space for women.
 ● Erosion of trust: The prevalence of deepfakes challenges the trustworthiness of media content, making it more difficult for people to rely on what they see and hear. 

● Ethical challenges: Balancing the need to combat the negative impacts of deepfakes with the protection of free speech and artistic expression poses a complex ethical challenge. 


Opportunities with Deepfake technology

 ● Entertainment: Voices and likenesses can be used to achieve desired creative effects.
 
● E-commerce: Retailers could let customers use their likenesses to virtually try on clothing.
● Communication: Speech synthesis and facial manipulation can make it appear that a person is authentically speaking another language.
 ● Research and Simulation: It can aid in training professionals in various fields by providing realistic scenarios for practice, such as medical training. 

Regulatory measures applicable to deepfakes 


● Legal provisions in India: There are no specific legal provisions against deepfake technology in India. However, some laws indirectly address deepfakes:
 ■ Section 66E of the IT Act, 2000: Penalises capturing, publishing, or transmitting a person's images in a manner that violates their privacy.
 ■ Section 66D of the IT Act, 2000: Provides for the prosecution of individuals who use communication devices or computer resources with malicious intent to cheat or impersonate someone.
 ■ Indian Copyright Act, 1957: Provides penalties for the infringement of copyright.

● Global measures against Deepfakes:

 
■ Bletchley Declaration: Over 25 major countries, including India, the United States, China, Japan, and the UK, have called for cooperation to tackle the potential risks of AI.
■ Digital Services Act of the EU: Requires social media platforms to adhere to labelling obligations, enhancing transparency and helping users determine the authenticity of media.
■ Tools announced by Google: Watermarking to identify synthetically generated content.
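As a toy illustration of the watermarking idea only (not a description of any real tool, which would use far more robust, tamper-resistant techniques), the sketch below hides a short bit pattern in the least-significant bits of an image array and checks for it later. The MARK tag and both functions are hypothetical names for this example; a real provenance watermark must survive compression and editing, which this naive approach would not.

```python
# Toy watermarking sketch (illustrative only): embed a short identifier in the
# least-significant bits of an image array, then check whether it is present.
import numpy as np

MARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # illustrative 8-bit tag

def embed_watermark(image: np.ndarray) -> np.ndarray:
    """Return a copy of the image with MARK written into its first pixels' low bits."""
    marked = image.copy()
    flat = marked.reshape(-1)
    flat[:MARK.size] = (flat[:MARK.size] & 0xFE) | MARK   # overwrite the lowest bit
    return marked

def has_watermark(image: np.ndarray) -> bool:
    """Report whether the expected tag appears in the image's low bits."""
    flat = image.reshape(-1)
    return bool(np.array_equal(flat[:MARK.size] & 1, MARK))

synthetic = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
print(has_watermark(synthetic))                   # almost certainly False for raw pixels
print(has_watermark(embed_watermark(synthetic)))  # True after embedding
```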

Way ahead

☆ Strengthening legal framework: Need to establish and update laws and regulations specifically addressing the creation, distribution, and malicious use of deepfakes and associated content.

☆ Promote Responsible AI Development: Need to encourage ethical practices in AI development, including the responsible use of deep learning technologies.
 ■ The Asilomar AI Principles can act as a guide to ensuring safe and beneficial AI development.

☆ Responsibility and Accountability of social media platforms: The need is to create uniform standards that all platforms can adhere to and that are common across borders. For example, YouTube has recently announced measures requiring creators to disclose whether content is created using AI tools.
☆ International Cooperation: Establish shared standards and protocols for combating the use of deepfakes across borders.

☆ Invest in Research and Development: Allocate resources to support ongoing research into deepfake technologies, detection methods, and countermeasures.


