In the last session of Asiacrypt 2015, Muthuramakrishnan Venkitasubramaniam presented his paper "Secure Computation from Millionaire", joint work with abhi shelat. Given a function f, the standard way to perform secure computation for f consists of two steps: first, f is transformed into either an arithmetic/boolean circuit or a RAM program; second, a generic secure computation protocol (for example Yao- or GMW-based, or information-theoretic) is applied to perform the actual evaluation. In his talk Muthuramakrishnan described a different approach to designing secure computation protocols.

The method he described first appeared at Eurocrypt 2004, in the work of Aggarwal, Mishra and Pinkas, where it was used to compute the median: Alice holds a private dataset S_A and Bob a private dataset S_B. Each party computes the median of its own dataset, say m_A for Alice and m_B for Bob, and the two are compared jointly. If, for example, m_A < m_B, then Alice deletes all the values in S_A smaller than m_A and Bob deletes all the values in his dataset bigger than m_B. Thereafter, they recursively repeat this step on the smaller datasets. By replacing each comparison with a small secure protocol, it is possible to prove security of the overall protocol in the semi-honest model. This technique was later generalized by Brickell and Shmatikov and applied to solve the shortest path problem.
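The pruning logic can be illustrated with a plaintext sketch (assuming, for simplicity, that both datasets are sorted, of equal power-of-two size, and that the joint median means the lower median of the union). In the actual protocol, each `<` between the two parties' medians is where a secure millionaire sub-protocol would run, so only the comparison bit is revealed at each round:

```python
def joint_median(s_a, s_b):
    """Plaintext sketch of the Aggarwal-Mishra-Pinkas pruning idea.

    s_a, s_b: sorted lists of equal, power-of-two length.
    Returns the lower median of the union. In the secure protocol,
    each comparison of medians below is replaced by a millionaire
    (secure comparison) sub-protocol.
    """
    assert len(s_a) == len(s_b)
    while len(s_a) > 1:
        half = len(s_a) // 2
        m_a, m_b = s_a[half - 1], s_b[half - 1]  # each party's lower median
        if m_a < m_b:                 # <- secure comparison in the protocol
            s_a = s_a[half:]          # Alice drops her lower half
            s_b = s_b[:half]          # Bob drops his upper half
        else:
            s_a = s_a[:half]
            s_b = s_b[half:]
    return min(s_a[0], s_b[0])
```

Each round halves both datasets, so the protocol terminates after logarithmically many secure comparisons.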

The natural question which now arises is: what else can we compute using the comparison (millionaire) function? The authors generalized the techniques from both the aforementioned papers to a large class of problems (matroid optimization, convex hull, job scheduling, set cover) that can be seen as instantiations of the greedy-algorithm paradigm. The main idea is to reduce these problems to iterative comparison operations and gradually reveal the answer, as in the median computation. Unfortunately, this implies that simulation-based security can only be guaranteed (under certain conditions) in the semi-honest and covert settings, because a malicious adversary could adaptively abort in the middle of the computation.
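As a hypothetical illustration of the paradigm (not an example taken from the paper), consider shortest-job-first scheduling over two parties' private job lengths. The greedy loop only ever compares the parties' current minima, so each comparison can be replaced by a millionaire sub-protocol, and what is gradually revealed is exactly the output: whose job comes next.

```python
def greedy_schedule(jobs_a, jobs_b):
    """Hypothetical sketch: merge two private job lists in
    shortest-job-first order using only comparisons between the
    parties' current minima. Each `<` below is where the millionaire
    protocol would run; the returned order ('A' or 'B' at each step)
    is the gradually revealed answer.
    """
    a, b = sorted(jobs_a), sorted(jobs_b)
    order = []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] < b[j]:              # <- secure comparison in the protocol
            order.append('A'); i += 1
        else:
            order.append('B'); j += 1
    order += ['A'] * (len(a) - i) + ['B'] * (len(b) - j)
    return order
```

Note that the intermediate comparison bits coincide with (a prefix of) the output itself, which is why an adversary who aborts mid-protocol learns nothing beyond a prefix of the answer in the semi-honest setting.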

Muthuramakrishnan concluded his talk with some interesting open problems, such as finding examples that admit malicious security and generalizing their framework to other primitives or paradigms.