Deepfakes and deep trust
16 May
Leo Levit, Chairman, ONVIF Steering Committee, explores why we need authentication technology in order to secure video evidence.
The rise of generative AI is expected to revolutionise the ways we use our security systems as well as the information that these systems will be able to provide. These promises of new levels of efficiency, accuracy and depth of data from artificial intelligence are capturing most of the attention surrounding AI and its influences on security.
However, the impacts of generative AI are not all positive, and some present serious threats to the integrity of one of the industry’s core technologies – video surveillance. One of the most pressing areas of concern is the growing prevalence of manipulated or ‘deepfake’ videos, made possible by the mainstream availability of video alteration tools built on generative AI. Creating these videos used to require expert skills and expensive equipment, but it can now be done with everyday apps, many of which are free to download or use.
While some instances of deepfakes or manipulated video are easier to spot – think a dinosaur wandering around an office lobby – other alterations to video can be seamless to the viewer because they don’t involve obvious changes in the video footage. For example, altering the timestamp of a video clip to a different day or time can provide incorrect information about when the event occurred. Taking out specific frames of a video clip from an event can remove the event or person of interest in the video, which means the clip does not accurately represent the scene. More extreme examples include substituting a face with that of another person in a scene or using generative AI to put a firearm in an individual’s hand.
Global issue
The threat of deepfakes has moved from theoretical concern to practical reality in recent years. In 2024, a multinational company in Hong Kong was tricked into wiring $25 million to fraudsters after participating in a video call with a deepfake of the company’s chief financial officer. Law enforcement agencies have also reported instances where manipulated surveillance footage was submitted as evidence in criminal cases, with timestamps and content altered to create false alibis.
This ability to alter video can ultimately pose significant challenges to organisational trust in video evidence and the industry’s ability to maintain the authenticity of surveillance footage, which can have severe consequences in many areas. Video is one of the most crucial pieces of evidence used in criminal investigations, court proceedings and internal corporate security investigations.
In many countries, a robust chain of custody process is required as part of law enforcement investigations and the admission of video as evidence in court. Public distrust in video can easily raise reasonable doubt in the eyes of a jury or judge, in both court proceedings and corporate investigations.
If the current legal precedents on the admissibility of video evidence are undermined by AI manipulation, courts may be forced to establish entirely new standards for this type of evidence, potentially excluding video in cases where its authenticity cannot be definitively established.
Crimes against corporates
For corporate security, the stakes are equally high. Internal investigations rely heavily on surveillance footage to resolve incidents, ranging from workplace safety violations to theft and harassment claims. Human resources departments and corporate legal teams often base consequential decisions on video evidence. If this evidence is in doubt, organisations face increased liability risks, higher settlement costs and greater difficulty in fairly resolving disputes in the workplace. Insurance companies have also begun expressing concern about the ability to verify claims in an era of manipulable video, with some policies now specifically addressing digital evidence reliability.
The impacts extend beyond the courtroom and corporate settings. Public safety organisations, transportation systems, critical infrastructure protection and national security applications all rely on verified video for both real-time decision-making and after-action reviews.
As these threats continue to grow, traditional forensic techniques for safeguarding video footage will not be enough to protect against generative AI’s ability to covertly and overtly alter surveillance video. This growing need for new solutions highlights the importance of industry collaboration and a standardised way to preserve the integrity of video, and with it institutional trust in footage as an accurate view of a situation.
Finding a solution with video signing
As a global standards organisation, ONVIF is working on a method of video authentication called video signing, which provides proof that the video has not been altered since it left the specific camera sensor that captured it. Securing the video at its earliest point – when the sensor in the camera captures the footage – is key to ensuring its authenticity and trustworthiness from camera to court.
On a technical level, the method involves a camera having a unique signing key that is used to sign a group of video frames, where each frame is accounted for. The signature is then embedded in the video. When the video is played through a media player (like a standalone video player or video management client) that supports video signing and a trusted root certificate from the camera manufacturer, the media player can verify that the video data originated directly from that specific camera and has not been tampered with. If pixels in a video frame have been altered, or frames have been removed or reordered, the signature verification will fail and the video player signals that the video is not valid.
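The frame-accounting idea described above can be sketched in a few lines of Python. This is an illustration only, not the ONVIF implementation: it uses an HMAC with a hypothetical shared key as a simple stand-in for the camera’s asymmetric signing key and certificate chain, and all names are invented for the example. The essential property carries over, though: each frame in a group is hashed in order, the ordered list of hashes is signed, and any altered, removed, or reordered frame causes verification to fail.

```python
import hashlib
import hmac

# Stand-in for the camera's unique signing key (the real scheme would use
# an asymmetric key pair anchored in the manufacturer's root certificate).
SIGNING_KEY = b"camera-unique-secret"

def sign_gop(frames: list[bytes]) -> bytes:
    """Sign a group of frames: hash each frame in order, then sign the digest."""
    digest = hashlib.sha256()
    for frame in frames:
        digest.update(hashlib.sha256(frame).digest())  # each frame is accounted for
    return hmac.new(SIGNING_KEY, digest.digest(), hashlib.sha256).digest()

def verify_gop(frames: list[bytes], signature: bytes) -> bool:
    """Recompute the signature over the received frames and compare."""
    return hmac.compare_digest(sign_gop(frames), signature)

frames = [b"frame-1", b"frame-2", b"frame-3"]
sig = sign_gop(frames)

verify_gop(frames, sig)                             # True: footage untouched
verify_gop([b"frame-1", b"frame-3"], sig)           # False: a frame was removed
verify_gop([b"frame-2", b"frame-1", b"frame-3"], sig)  # False: frames reordered
```

Because the frame hashes are fed into the signature in sequence, the scheme detects not just pixel changes within a frame but also the subtler manipulations mentioned earlier, such as dropping the frames that contained an event of interest.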
Simplifying authentication for law enforcement
Standardising video authentication gives recipients a single mechanism for verifying the authenticity of the video they receive. This can streamline processes for video users such as law enforcement and other criminal justice personnel, who deal with footage generated by systems from many different manufacturers that may use a variety of methods for protecting video. In addition, because the video is secured right from the specific camera that captured it, its authenticity can be verified at every step – from the camera to a person viewing the exported recording – reducing the burden of proving the chain of custody. With authentication provided at the point of capture, the video can be traced back to the device that recorded it.
Open source release
ONVIF is planning to release its implementation of video signing as an open source project. Providing the specification to the open source community will add transparency to both the ONVIF method and its technical implementation, making it easier for a wide community of developers to adopt, helping the standard gain traction in the security industry, and preserving trust in the authentication process and the integrity of the video itself.
Standardising this process for the security industry and others that rely on camera footage for other uses will provide consistency and reliability in the authenticity of video. ONVIF believes that video authentication at the source (from the camera) through video signing will provide the assurances needed to preserve trust in surveillance video.