Threat model

Each rendering task essentially contains a watermarked scene $\hat{G}$ and a range of frames to be rendered. The goal of an attacker, namely a malicious worker (or, in general, a group of maliciously colluding workers [12]), is to generate rendered frames that pass the noise verification at a computational cost significantly lower than rendering this range with conventional rendering software.

By Kerckhoffs's principle, we assume that the attackers know the noise generation and verification algorithms, including their working parameters and trade-offs. However, the task identification numbers and the corresponding verification keys are kept secret. Furthermore, we require the strong assumption that the attackers cannot detect the existence of watermarks in scenes. That is, attackers can analyze and even modify different watermarked scenes $\hat{G}(s)$, but they cannot distinguish objects of the noise wrapping vector $\Omega$ (discussed in detail in the noise generation) embedded in $\hat{G}(s)$ from the original graphical objects of $G$. Beyond this, we place no constraint on the communication capability of colluding attackers.

Not surprisingly, the security of ANGV can be modeled as the problem of sending steganographic messages over a public communication channel with passive adversaries [13], [14]. Indeed, consider some graphics scene: by repeatedly receiving rendering tasks for this scene and sending back results (both genuinely rendered and intentionally forged), an attacker (or a set of colluding attackers) accumulates a set of accepted and rejected images. The attacker analyzes these tested images to estimate the probability distributions $P_{\mathcal{S}}$ and $P_{\mathcal{C}}$ of, respectively, images that would pass the noise verification and images that would be genuine rendering results of the scene. We use the traditional notation of the steganography literature: $C$ for cover-work and $S$ for stego-work [15].

The information-theoretic security of ANGV is quantified by the Kullback–Leibler divergence (i.e., relative entropy) $D\left(P_{\mathcal{C}} \mathrel{\Vert} P_{\mathcal{S}}\right)$ of $P_{\mathcal{C}}$ from $P_{\mathcal{S}}$. Concretely, ANGV is called $\epsilon$-secure if

$$\lim_{n \to \infty} D\left(P_{\mathcal{C}} \mathrel{\Vert} P_{\mathcal{S}}\right) \leq \epsilon$$

where $n$ is the number of tested images. In particular, $\epsilon = 0$ if and only if $P_{\mathcal{C}} = P_{\mathcal{S}}$, i.e., the attacker cannot distinguish watermarked images from genuinely rendered ones; in this case we have perfect security.
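As a concrete illustration of the definition above, the Kullback–Leibler divergence of two discrete distributions can be computed directly. The following is a minimal sketch; the four-bucket toy distributions are hypothetical and not part of ANGV:

```python
import math

def kl_divergence(p, q):
    """D(P || Q) = sum over x of P(x) * log(P(x) / Q(x)).

    Returns infinity when Q assigns zero mass where P does not
    (the attacker could then distinguish with certainty).
    """
    total = 0.0
    for pi, qi in zip(p, q):
        if pi == 0.0:
            continue  # the term 0 * log(0 / q) is taken as 0
        if qi == 0.0:
            return math.inf
        total += pi * math.log(pi / qi)
    return total

# Hypothetical toy distributions over four pixel-intensity buckets:
# P_C (cover: genuinely rendered) vs. P_S (stego: watermarked).
p_cover = [0.25, 0.25, 0.25, 0.25]
p_stego = [0.24, 0.26, 0.25, 0.25]

print(kl_divergence(p_cover, p_stego))  # small positive epsilon
print(kl_divergence(p_cover, p_cover))  # 0.0: perfect security
```

The divergence is zero exactly when the two distributions coincide, matching the perfect-security case $P_{\mathcal{C}} = P_{\mathcal{S}}$.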

Remark. The distributions $P_{\mathcal{S}}$ and $P_{\mathcal{C}}$ represent the partial knowledge of the attacker obtained by analyzing the set of tested images: the larger this set (i.e., the larger $n$), the more precise the estimates of $P_{\mathcal{S}}$ and $P_{\mathcal{C}}$.
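The remark can be made quantitative with a small simulation: when a discrete distribution is estimated from $n$ samples, the empirical frequencies converge to the true probabilities as $n$ grows. The three-point "true" distribution below is hypothetical, standing in for the unknown distribution of tested images:

```python
import random

random.seed(0)  # fixed seed so the simulation is reproducible

# Hypothetical true distribution over three image classes.
support = [0, 1, 2]
true_p = [0.5, 0.3, 0.2]

def empirical_dist(n):
    """Estimate the distribution from n simulated tested images."""
    samples = random.choices(support, weights=true_p, k=n)
    return [samples.count(x) / n for x in support]

for n in (10, 1_000, 100_000):
    est = empirical_dist(n)
    err = max(abs(e - t) for e, t in zip(est, true_p))
    print(f"n={n:>6}: max estimation error {err:.4f}")
```

The printed estimation error shrinks as $n$ grows (at roughly the $1/\sqrt{n}$ rate expected of empirical frequencies), which is why the limit in the $\epsilon$-security definition is taken over the number of tested images.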
