[link]
Is the market efficient? This is perhaps the most prevalent question in all of finance. While this paper does not aim to answer that question, it does frame it in an information-theoretic context. Namely, Maymin shows that at least the weak form of the efficient market hypothesis (EMH) holds if and only if P = NP. First, he defines what an efficient market means: "The weakest form of the EMH states that future prices cannot be predicted by analyzing prices from the past. Therefore, technical analysis cannot work in the long run, though of course any strategy could make money randomly." Consider $n$ past historical price changes, each one of {UP, DOWN}, and trading strategies that respond by going neutral, long, or short. Checking whether there exists a strategy that makes money with statistical significance requires checking all $3^n$ possible strategies, while verifying that a given strategy is profitable beyond random chance takes a single linear $O(n)$ pass over the historical data. Thus the problem of finding a profitable strategy is in NP. If P = NP, then a profitable strategy can be computed efficiently in polynomial time, since a trader can check each candidate strategy in polynomial time; trading on the results would in turn make the market efficient. Conversely, if the market is efficient, it becomes impossible for a trader to predict future prices based on historical data, as the current price has all publicly available information "priced in"; a future price would be a random fluctuation of the current price. Does an efficient market imply P = NP? Take an example 3SAT instance: $$(a \lor b \lor \lnot c) \land (a \lor \lnot b \lor d)$$ The NP-complete problem 3SAT can be encoded into the market using order-cancels-order (OCO) orders[^1], where each variable is a security and negated variables are sell orders. Place these two OCOs in the market:
$$\text{OCO (buy A, buy B, or sell C)}$$ $$\text{OCO (buy A, sell B, or buy D)}$$ After some constant time, any outstanding order is cancelled and all positions are liquidated. If the market is efficient, then there exists a way to execute these two OCO orders such that an overall profit is guaranteed. In other words, an efficient market lets us solve an arbitrary 3SAT problem in polynomial time.

### **Takeaway**: The author links market efficiency with computational complexity.

[^1]: An order-cancels-order is a type of order on two different securities that automatically cancels the other order when one is filled. There is no chance that both orders will be filled.
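To make the encoding concrete, here is a minimal brute-force check of the example instance above. The clause representation (a variable index plus a negation flag) is my own; in the paper's reduction, the exhaustive search below is what an efficient market would implicitly perform when filling the OCO legs.

```python
# Brute-force satisfiability check for the example 3SAT instance
# (a ∨ b ∨ ¬c) ∧ (a ∨ ¬b ∨ d), matching the two OCO orders.
from itertools import product

def satisfiable(clauses, n_vars):
    """A clause is a list of (var_index, negated) literals."""
    for assignment in product([False, True], repeat=n_vars):
        # literal (i, neg) holds when assignment[i] != neg
        if all(any(assignment[i] != neg for i, neg in clause)
               for clause in clauses):
            return True
    return False

# variables: a=0, b=1, c=2, d=3
clauses = [[(0, False), (1, False), (2, True)],   # (a ∨ b ∨ ¬c)
           [(0, False), (1, True),  (3, False)]]  # (a ∨ ¬b ∨ d)
assert satisfiable(clauses, 4)   # e.g. a=True satisfies both clauses
```

Setting $a$ true (buying A in both OCOs) satisfies both clauses, mirroring a fill pattern that would lock in the guaranteed profit.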
[link]
Format-preserving encryption (FPE) is a deterministic encryption scheme that encrypts plaintext of some specified format into ciphertext of the same format. This has many practical use cases, such as storing SSNs or credit card numbers without having to change the underlying schema of the database or application that stores the data. The protected data is indistinguishable in format from unprotected data, and still enables some analytics over it, such as masking (i.e., displaying the last four digits of a format). For other analytics, and depending on the use case, differential privacy or FHE should be considered. This paper primarily describes the construction of a small-space FPE scheme, starting from earlier constructions based on work by Terence Spies and M. Bellare. The authors develop an FPE scheme that is tweakable to enhance security; the tweak is imperative for ciphertext alphabets of small range. A tweak value is like an initialization vector, chosen by the user. It ensures that the same plaintext encrypted under two different tweak values enciphers to different ciphertexts. Of particular interest for practical applications are 'cycle-walking' and 'rank-then-encipher'. These are "metatechniques" for building an FPE scheme that essentially ensure you get a ciphertext in the desired ciphertext space. misc (pg 9): Cycle-walking: suppose you want to encipher on some message space $\chi$ with an FPE scheme $$E: K \times \chi \rightarrow \chi$$ but you only know how to encipher on a superset $\chi' \supseteq \chi$ with an FPE scheme $$E' : K \times \chi' \rightarrow \chi'$$ All you need to do is keep enciphering the message $X \in \chi$ under $E'$ until the result lands back in $\chi$. Rank-then-encipher: to encipher a point $X \in \chi$, map it to a corresponding point $X' \in \chi'$, encipher that point in $\chi'$ to get a ciphertext $Y' \in \chi'$, then map $Y'$ back to its corresponding point $Y \in \chi$.
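A toy sketch of cycle-walking: the hard-coded permutation below stands in for a real cipher $E'$ on the superset $\chi' = \{0,\dots,15\}$ (it is illustrative only, not secure). Repeatedly applying $E'$ until the output lands back in $\chi = \{0,\dots,9\}$ yields a permutation on $\chi$, because walking along a cycle of $E'$ always returns to the subdomain.

```python
# Cycle-walking sketch: PERM is a stand-in for a keyed FPE cipher E' on
# X' = {0..15}; we want ciphertexts in X = {0..9}.
PERM = [3, 14, 7, 0, 9, 11, 2, 5, 12, 6, 15, 8, 1, 10, 4, 13]  # toy E'

def cycle_walk_encrypt(x, domain=10):
    y = PERM[x]
    while y >= domain:      # keep walking until we land back in X
        y = PERM[y]
    return y

ciphertexts = [cycle_walk_encrypt(x) for x in range(10)]
assert sorted(ciphertexts) == list(range(10))  # still a bijection on X
```

Decryption walks the inverse permutation the same way; the expected number of extra steps depends on the ratio $|\chi'| / |\chi|$, which is why the technique is practical when the superset is not much larger than the target space.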
[link]
Brakerski and Vaikuntanathan introduce a fully homomorphic encryption (FHE) scheme based solely on the decisional learning with errors (LWE) security assumption, moving away from the relatively obscure mathematics of ideal lattices. They introduce relinearization and modulus-switching techniques for dimensionality reduction and for removing the "squashing" step of Craig Gentry's FHE scheme. BV11 and other similar schemes are commonly referred to as "second generation" FHE schemes. Relinearization lowers the dimension of the produced ciphertext, and allows the construction of a somewhat homomorphic encryption scheme capable of evaluating circuits of $O(\log n)$ (i.e., multiplicative) depth. With every homomorphic multiplication, the ciphertext error is squared. To slow down the growth of the error, BV11 shows that you can choose the new secret key used in relinearization over a smaller modulus $p$, taking the ciphertext dimensions from $(n, \log q)$ down to $(k, \log p)$, small enough to achieve a bootstrappable scheme and thus FHE. Below is an example of relinearization for multiplicative homomorphism from the paper. Mostly for my own notes, but it may give additional intuition for others. What confused me was the "publishing" of the linear and quadratic terms in $s$. What they mean is that an evaluation key is published, containing the 'encryptions' of the linear and quadratic terms. For example, if $n=2$, this would be the set of encryptions for secret key $s$: Enc ${\{s_1, s_2, s_1s_2, s_1^2, s_2^2\}}$.
Given ciphertexts $(\mathbf{a},b)$ and $(\mathbf{a}',b')$, we can describe homomorphic multiplication with the decryption function $f$: $$\begin{align} f_{(\mathbf{a},b)}(x) \cdot f_{(\mathbf{a'},b')}(x) &= \Big(b - {\sum_{i}^n} \mathbf{a}[i] \cdot x[i]\Big) \cdot \Big(b' - {\sum_{i}^n} \mathbf{a}'[i] \cdot x[i]\Big)\\ &= h_0 + \sum_{i}^n h_i \cdot x[i] + \sum_{i,j}^n h_{i,j} \cdot x[i]x[j] \end{align}$$ where $h_i$ and $h_{i,j}$ are the coefficients of this degree-2 polynomial in $\mathbf{x}$: $$\begin{align} h_0 &= bb' \\ h_i &= -(b\mathbf{a}'[i] + b'\mathbf{a}[i]) \\ h_{i,j} &= \mathbf{a}[i]\mathbf{a}'[j] \end{align}$$ With the secret key $s \in \mathbb{Z}_q^n$, we evaluate $f$ on $s$ to get the plaintext. Because the decryption function needs to know all of the coefficients of the quadratic polynomial to decrypt correctly, the size of the ciphertext jumps from $n+1$ to order $\approx n^2$. What BV11 shows is a relinearization technique that reduces the size of the ciphertext back down to $n+1$. To do this, we create an evaluation key $evk$ made up of encryptions, under a new key $\mathbf{t}$, of the linear and quadratic terms of $s$. These ciphertexts form the set $\{(\mathbf{a}_i,b_i), (\mathbf{a}_{i,j},b_{i,j})\}$, where each $\mathbf{a}$ is a random vector $\in \mathbb{Z}_q^n$ and $$\begin{align} b_{i,j} &\approx \langle \mathbf{a}_{i,j},\mathbf{t}\rangle + s[i] \cdot s[j] \\ b_{i} &\approx \langle \mathbf{a}_{i},\mathbf{t}\rangle + s[i] \end{align}$$ Now we can rewrite the symbolic multiplication of the decryption function as: $$\begin{align} f_{(\mathbf{a}_{mult},b_{mult})}(\mathbf{t}) &= h_0 + \sum_{i}^n h_i (b_i - \langle \mathbf{a}_i,\mathbf{t} \rangle) + \sum_{i,j}^n h_{i,j} (b_{i,j} - \langle \mathbf{a}_{i,j}, \mathbf{t} \rangle) \end{align}$$ This is a linear function in $\mathbf{t}$! The number of coefficients of this linear equation is at most $n+1$.
$$\begin{align} \mathbf{a}_{mult} &= \sum_{i}^n h_i \mathbf{a}_i + \sum_{i,j}^n h_{i,j} \mathbf{a}_{i,j}\\ b_{mult} &= h_0 + \sum_{i}^n h_i b_i + \sum_{i,j}^n h_{i,j} b_{i,j} \end{align}$$

## See Also

[lecture](https://www.youtube.com/watch?v=MB3mSaG6Bro) from MathNet Korea (potato res).
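As a sanity check on the coefficient expansion, here is a noise-free toy with tiny parameters of my own choosing (no actual LWE noise, modulus, or evaluation key security): it verifies that $h_0, h_i, h_{i,j}$ reproduce the product of the two decryption functions when evaluated at $s$.

```python
# Toy check that the quadratic expansion of the product of two BV11-style
# decryption functions f(x) = b - <a, x> matches direct evaluation.
# Parameters are illustrative only; real LWE needs noise and large moduli.
q = 97
n = 2
s = [3, 5]                      # secret key
a,  b  = [2, 7], 11             # ciphertext 1
ap, bp = [4, 1], 23             # ciphertext 2

def f(a_vec, b_val, x):
    return (b_val - sum(ai * xi for ai, xi in zip(a_vec, x))) % q

direct = (f(a, b, s) * f(ap, bp, s)) % q

# coefficients of the degree-2 polynomial from the writeup
h0  = (b * bp) % q
h   = [-(b * ap[i] + bp * a[i]) % q for i in range(n)]
hij = [[(a[i] * ap[j]) % q for j in range(n)] for i in range(n)]

expanded = h0
expanded += sum(h[i] * s[i] for i in range(n))
expanded += sum(hij[i][j] * s[i] * s[j] for i in range(n) for j in range(n))
expanded %= q

assert direct == expanded       # the expansion agrees term by term
```

In the real scheme the terms $s[i]$ and $s[i]s[j]$ are not evaluated directly but replaced by the evaluation-key ciphertexts $(b_i - \langle \mathbf{a}_i, \mathbf{t}\rangle)$ and $(b_{i,j} - \langle \mathbf{a}_{i,j}, \mathbf{t}\rangle)$, which is exactly what makes the result linear in $\mathbf{t}$.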
[link]
Cover's Universal Portfolio is an information-theoretic portfolio optimization algorithm that utilizes constantly rebalanced portfolios (CRPs). A CRP is one in which the distribution of wealth among stocks in the portfolio remains the same from period to period. Universal Portfolio performs rebalancing based strictly on historical pricing, making no assumptions about the underlying distribution of the prices. The wealth achieved by a CRP over $n$ periods is: $$S_n(\mathbf{b}, x^n) = \prod_{i=1}^{n} \mathbf{b} \cdot \mathbf{x}_i$$ where $\mathbf{b}$ is the allocation vector and $\mathbf{x}_i$ is the vector of price relatives in period $i$. The key takeaway: Cover takes the wealth-weighted integral of $\mathbf{b}$ over the entire simplex of portfolios to give $\mathbf{b}_{t+1}$. This is what makes it "universal". Most implementations in practice do this discretely, by creating a matrix $\mathrm{B}$ with each row containing one combination of percentage allocations, and calculating $\mathrm{S} = \mathrm{B} \cdot \mathrm{x}$. Cover mentions trading costs will eat away most of the gains, especially if this algorithm is allowed to rebalance daily. Nowadays, there are commission-free brokers. See this summary for Universal Portfolios without transaction costs: (Blum and Kalai, COLT 1997).
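A minimal discrete sketch of this procedure for two assets, with hypothetical price relatives; the grid of candidate CRPs replaces Cover's integral over the simplex, and each period the next allocation is the wealth-weighted average of the candidates.

```python
# Discrete approximation of Cover's Universal Portfolio (two assets,
# made-up price relatives; rows of x are periods).
import numpy as np

x = np.array([[1.02, 0.98],
              [0.97, 1.05],
              [1.01, 1.00]])

grid = np.linspace(0.0, 1.0, 21)
B = np.stack([grid, 1.0 - grid], axis=1)   # candidate allocation vectors b

wealth = np.ones(len(B))                   # S_t(b) for each candidate CRP
for x_t in x:
    # universal allocation: wealth-weighted average of all candidate b's
    b_next = (wealth[:, None] * B).sum(axis=0) / wealth.sum()
    wealth *= B @ x_t                      # S_{t+1}(b) = S_t(b) * (b . x_t)

print(b_next)                              # allocation for the next period
```

Before any data arrives the mixture is uniform over the grid, so the first allocation is the equal-weight portfolio; as wealth concentrates on the better-performing CRPs, the allocation drifts toward them.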
