Describe an example of indistinguishability obfuscation or functional encryption

source: https://security.stackexchange.com/questions/50937/describe-an-example-of-indistinguishability-obfuscation-or-functional-encryption/50972#50972

IO: Indistinguishability Obfuscation
FE: Functional Encryption
The article describes two constructions: the first provides indistinguishability obfuscation, while the second, functional encryption, uses the first as a building block.

Indistinguishability obfuscation is a rather esoteric property, and it is not what non-academics think of when they hear "obfuscation"; the terminology is a misnomer. The property means the following: if some "processing" can be encoded as a circuit (roughly speaking, a piece of code that can be unrolled, with no unbounded loops), and several distinct circuits yield the same results, then indistinguishability obfuscation allows publication of an "obfuscated circuit" such that anybody can run it and obtain the result, yet outsiders cannot tell which of the possible circuits was used as the basis internally.
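To make the property concrete, here is a minimal Python sketch (my own illustration, not from the article). For a tiny input domain, "obfuscating" a circuit by tabulating it over all inputs is a trivially inefficient indistinguishability obfuscator: two distinct but functionally equivalent circuits canonicalise to the exact same table, so an observer cannot tell which one was used.

```python
# Toy illustration of the indistinguishability property, not a real IO scheme:
# for a tiny input domain, emitting the full truth table makes two distinct but
# equivalent circuits produce byte-for-byte identical "obfuscated" outputs.
from itertools import product

def circuit_a(x, y, z):
    # one way to compute majority(x, y, z)
    return (x and y) or (x and z) or (y and z)

def circuit_b(x, y, z):
    # a structurally different circuit with the same input/output behaviour
    return (x and (y or z)) or (y and z)

def obfuscate(circuit, n_inputs):
    """Canonicalise the circuit by tabulating it over every possible input."""
    return tuple(circuit(*bits) for bits in product([False, True], repeat=n_inputs))

assert obfuscate(circuit_a, 3) == obfuscate(circuit_b, 3)
# Anyone can evaluate the obfuscated form, but nobody can tell whether
# circuit_a or circuit_b was the basis; that is the guarantee IO provides.
```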

What IO is good for, by itself, is unclear. The authors nonetheless present an example in section 1.7 of their article, which is rather far-fetched:

Software developers will often want to release a demo or restricted use version of their software that limits the features that are available in a full version. In some cases a commercial software developer will do this to demonstrate their product; in other cases the developer will want to make multiple tiers of a product with different price points. In other domains, the software might be given to a partner that is only partially trusted and the developer only wants to release the features needed for the task.

Ideally, a developer could create a downgraded version of software simply by starting with the full version and then turning off certain features at the interface level -- requiring minimal additional effort. However, if this is all that is done, it could be easy for an attacker to bypass these controls and gain access to the full version or the code behind it. The other alternative is for a software development team to carefully excise all unused functionality from the core of the software. Removing functionality can become a very time consuming task that could itself lead to the introduction of software bugs. In addition, in many applications it might be unclear what can and cannot remain for a restricted use version.

One immediate solution is for a developer to restrict the use at the interface level and then release an obfuscated version of the program. For this application indistinguishability obfuscation suffices, since by definition a version restricted in the interface is indistinguishable from an obfuscated program with equivalent behavior that has its smarts removed at the start.

The use case is dubious, at best. However, IO has a more useful (theoretical) functionality, which the authors develop afterwards: it enables them to build functional encryption.

Functional encryption is about providing a computable circuit (obfuscated with IO) that receives as input an encrypted version of some value x, and returns F(x) for some function F, without revealing anything else about x. The authors show how to do this for any function F that can be encoded as a circuit, and the resulting obfuscated circuit is "polynomially-sized" with regard to the original unobfuscated circuit implementing F.
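To fix ideas on what "functional encryption" means as an interface, here is a hypothetical Python sketch of the usual four algorithms; the placeholder bodies model only the correctness contract decrypt(sk_F, encrypt(mpk, x)) == F(x), and provide none of the hiding that the actual construction is about.

```python
# Hypothetical functional-encryption interface. Only the correctness contract
# decrypt(sk_F, encrypt(mpk, x)) == F(x) is modelled here; the placeholder
# bodies do not hide x at all, unlike a real scheme.
import os

def setup():
    msk = os.urandom(16)                  # master secret key
    mpk = b"public-parameters"            # public parameters (placeholder)
    return mpk, msk

def keygen(msk, F):
    # "functional key" for F; a real scheme would not simply embed msk and F
    return {"msk": msk, "F": F}

def encrypt(mpk, x):
    return {"ct": x}                      # placeholder: a real scheme hides x here

def decrypt(sk_F, ct):
    x = ct["ct"]                          # a real decryptor never recovers x itself
    return sk_F["F"](x)                   # only F(x) is revealed

mpk, msk = setup()
sk_sum = keygen(msk, lambda v: sum(v))    # key for F(x) = sum of the entries of x
assert decrypt(sk_sum, encrypt(mpk, [3, 5, 7])) == 15
```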

Now this "polynomially-sized" expression tells us that we are in the abstract world of mathematics, and not in the practical world. It relates to asymptotic behaviour when circuit size grows towards infinity. It does not tell much about how much the construction would cost in any practical, finite situation; only that if God plays with Universe-sized computers, then He will find the construction to be "tolerably efficient" provided that He first creates a large enough Universe to accommodate the bulk of the involved computers -- with no a priori measure of how large that Universe has to be for the theoretical result to apply. Empirically, we find that most mundane algorithms that we manipulate and that offer polynomial complexity are "somewhat fast", but that's mostly because these algorithms are very simple things; there is no proof or even suggestion that the construction described in the article would be as "mundane".

Functional encryption, if it can be made to work within reasonable CPU and RAM budgets, could be useful for security in a number of situations. For instance, imagine an FPS game opposing players on networked computers. For efficient 3D rendering, the local machine of each player must know the terrain map and the current location of all objects, including other players, so that it may decide whether, from the point of view of the local player, any other player is visible (and must be drawn on the screen) or hidden (e.g. sneakily poised behind a wall, ready to lob a hand grenade) -- but cheaters would find it very convenient to know these locations in real time. Each player is assumed to have complete control of his machine (he has physical access). With functional encryption, the player's machine may compute the rendering (that is the function F) based on the locations as sent by the server, without obtaining the locations themselves.
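As an illustration of what the function F could look like in that scenario (with made-up 2-D geometry and a single wall segment, purely for the sketch), the functional-encryption key would compute something like the following: for each opponent, the client obtains a drawable position only when the line of sight is clear, and nothing otherwise.

```python
# Sketch of the function F for the FPS example: reveal an opponent's position
# only when no wall blocks the line of sight. The 2-D geometry is invented
# purely for illustration; a real renderer would be far more involved.

def segments_intersect(p1, p2, q1, q2):
    """True if segment p1-p2 properly crosses segment q1-q2 (2-D)."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    return (cross(p1, p2, q1) * cross(p1, p2, q2) < 0 and
            cross(q1, q2, p1) * cross(q1, q2, p2) < 0)

def render_info(local_pos, opponent_positions, walls):
    """F: per opponent, return the position if visible, None if hidden."""
    result = []
    for opp in opponent_positions:
        blocked = any(segments_intersect(local_pos, opp, w[0], w[1]) for w in walls)
        result.append(None if blocked else opp)
    return result

walls = [((5.0, -10.0), (5.0, 10.0))]                 # one wall at x = 5
print(render_info((0.0, 0.0), [(3.0, 1.0), (8.0, 0.0)], walls))
# -> [(3.0, 1.0), None]: the second opponent is behind the wall and stays hidden
```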

Unfortunately, for practical applications, we are far from having a workable solution. What must be understood is that in all these constructions, each circuit gate must map to an instance of Gentry's fully homomorphic encryption scheme, and at every clock cycle of the obfuscated circuit, all gates must be processed, regardless of whether they are "active" or not (this is a big part of why the obfuscation theoretically works: it does not reveal which gates are active, precisely because all of them are always processed). This article gives performance results: on a rather big PC, we are looking at minutes of computation. That is for each gate in the obfuscated circuit, and for each clock cycle.

There are millions, probably even billions, of gates in the envisioned circuit, given the setup of functional encryption: the "obfuscated circuit" must include asymmetric decryption and validation of zero-knowledge proofs. So now we are talking about using all the computers currently running on Earth, all collaborating on the computation, and they might make non-negligible progress towards running one instance of functional encryption within a few centuries.
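As a rough back-of-the-envelope check of that order of magnitude (the per-gate cost and the gate count below are just the figures quoted above, not measurements), a single clock cycle of the obfuscated circuit already costs thousands of PC-years:

```python
# Back-of-the-envelope estimate using the figures quoted above; both numbers
# are assumptions, not benchmarks.
minutes_per_gate_per_cycle = 1            # lower end of "minutes" on a big PC
gates = 1_000_000_000                     # "billions of gates"

pc_minutes_per_cycle = minutes_per_gate_per_cycle * gates
pc_years_per_cycle = pc_minutes_per_cycle / (60 * 24 * 365)
print(f"about {pc_years_per_cycle:,.0f} PC-years per clock cycle")
# -> about 1,903 PC-years for one cycle, before multiplying by the number of
#    cycles the circuit needs and accounting for imperfect parallelism.
```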

These are just estimates that I made from what I saw in the articles, but they should give you the order of magnitude: the albatross metaphor falls very short of the reality; you would rather have to imagine a whole fleet of spaceships bent on galactic domination. If it flies at all, despite its bureaucratic heaviness, it will have tremendous inertia and be quite expensive.

Read more:
[1] Hiding Secrets in Software: A Cryptographic Approach to Program Obfuscation (CACM 2016)