Threat model
Given that each rendering task consists essentially of a watermarked scene and a range of frames to be rendered, the goal of an attacker, namely a malicious worker (or, more generally, a group of colluding malicious workers [12]), is to produce rendered frames that pass the noise verification at a computational cost significantly lower than rendering that range with conventional rendering software.
Following Kerckhoffs's principle, we assume that the attackers know the noise generation and verification algorithms, including their working parameters and trade-offs. The task identification numbers and the corresponding verification keys, however, are kept secret. Furthermore, we make the strong assumption that the attackers cannot detect the presence of watermarks in scenes. That is, attackers can analyze and even modify the watermarked scenes $\bar{S}$, but they cannot distinguish the objects of the noise wrapping vector (discussed in detail in the noise generation) embedded in $\bar{S}$ from the original graphical objects of the scene $S$. Beyond these assumptions, we place no constraint on the communication capabilities of colluding attackers.
Not surprisingly, the security of ANGV can be modeled as the problem of sending steganographic messages over a public communication channel in the presence of passive adversaries [13], [14]. Indeed, consider some graphics scene: by repeatedly receiving rendering tasks for this scene and sending back results (both genuinely rendered and intentionally forged), an attacker (or a set of colluding attackers) obtains a set of accepted and rejected images. The attacker analyzes these tested images to estimate the probability distributions $P_s$ and $P_c$ of, respectively, images that would pass the noise verification and images that would be genuine rendering results of the scene. We use the traditional notation of the steganography literature: $c$ for cover-work and $s$ for stego-work [15].
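As a concrete (and deliberately simplified) illustration of this estimation step, the sketch below builds empirical distributions of a single image statistic from the two sets of tested images. The feature choice, bin count, and the helper `empirical_distribution` are our own illustrative assumptions, not part of ANGV.

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_distribution(images, bins=64):
    """Estimate a discrete pmf over a single image statistic (mean intensity).

    A real attacker would work with a far richer feature space; one scalar
    feature per image keeps the sketch readable.
    """
    features = np.array([img.mean() for img in images])
    hist, _ = np.histogram(features, bins=bins, range=(0.0, 1.0))
    hist = hist.astype(float) + 1e-9      # smooth empty bins
    return hist / hist.sum()              # normalize to a pmf

# Stand-ins for the attacker's tested images: small grayscale arrays.
rendered_images = [rng.uniform(0.0, 1.0, (8, 8)) for _ in range(500)]  # cover-works
accepted_images = [rng.uniform(0.0, 1.0, (8, 8)) for _ in range(500)]  # stego-works

P_c = empirical_distribution(rendered_images)  # genuine rendering results
P_s = empirical_distribution(accepted_images)  # images passing noise verification
```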
The information-theoretic security of ANGV is quantified by the Kullback–Leibler divergence (i.e., relative entropy) between $P_c$ and $P_s$. Concretely, ANGV is called $\epsilon$-secure if

$$D(P_c \,\|\, P_s) = \sum_{x} P_c(x) \log \frac{P_c(x)}{P_s(x)} \leq \epsilon,$$

where $n$ is the number of tested images from which $P_c$ and $P_s$ are estimated. In particular, $D(P_c \,\|\, P_s) = 0$ if and only if $P_c = P_s$, that is, the attacker cannot distinguish watermarked images from genuinely rendered ones; in this case we have perfect security.
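A minimal numerical sketch of this definition, assuming both distributions are represented as discrete probability mass functions over the same bins (the toy pmfs and the threshold value are arbitrary illustrations):

```python
import numpy as np

def kl_divergence(p, q):
    """D(p || q) = sum_x p(x) * log(p(x) / q(x)), in nats."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0                       # terms with p(x) = 0 contribute nothing
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Toy stand-ins for the attacker's estimates of P_c and P_s over four bins.
P_c = np.array([0.25, 0.25, 0.25, 0.25])
P_s = np.array([0.24, 0.26, 0.25, 0.25])

epsilon = 0.01                         # an arbitrary illustrative bound
d = kl_divergence(P_c, P_s)
print(f"D(P_c || P_s) = {d:.6f} nats; epsilon-secure: {d <= epsilon}")
```

Setting $P_s$ equal to $P_c$ drives the divergence to exactly zero, which is the perfect-security case above.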
Remark. The distributions $P_c$ and $P_s$ represent the partial knowledge of the attacker obtained by analyzing the set of tested images: the larger this set (i.e., the larger $n$), the more precise the estimates of $P_c$ and $P_s$.
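This remark can be checked with a toy experiment: drawing two independent samples of size $n$ from one and the same underlying distribution, the divergence between the two empirical estimates shrinks as $n$ grows (the underlying pmf and bin count below are arbitrary choices, not taken from ANGV):

```python
import numpy as np

rng = np.random.default_rng(1)
true_pmf = np.array([0.1, 0.2, 0.3, 0.4])     # one underlying distribution

def estimate_pmf(n, bins=4):
    """Empirical pmf built from n samples of the underlying distribution."""
    samples = rng.choice(bins, size=n, p=true_pmf)
    counts = np.bincount(samples, minlength=bins).astype(float) + 1e-9
    return counts / counts.sum()

def kl(p, q):
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Two independent estimates of the SAME distribution: their divergence
# shrinks toward zero as the number of tested images n grows.
for n in (100, 1_000, 10_000, 100_000):
    p_hat, q_hat = estimate_pmf(n), estimate_pmf(n)
    print(f"n = {n:>7}: D(p_hat || q_hat) = {kl(p_hat, q_hat):.6f}")
```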