Hello, my name is LaKyah Tyner, and I am presenting Nearly Optimal Property-Preserving Hashing. This is joint work with Justin Holmgren, Minghao Liu, and Daniel Wichs. Let's begin by defining a property-preserving hash function. We have a hash function h, which takes as input some n-bit string x and outputs a compressed m-bit string h(x). We can consider some binary property P, which is equal to zero if the property does not hold and one if it holds. We say that h is a property-preserving hash function for the property P if, given h(x) and h(y), we are able to determine whether P(x, y) holds.

An (n, m) property-preserving hash family for a predicate P consists of a family of hash functions and the following algorithms: Samp, which samples one hash function from that hash family, and Eval, which takes as input a description of h, h(x), and h(y), and outputs zero or one. We also require the following correctness property: for every x and y, the probability that Eval outputs something different from P(x, y) is negligible. If we want robustness, we must also satisfy the next property, which says that for every PPT adversary A, correctness still holds even when A, who has seen a description of our hash function, is able to choose x and y.

So why do we care about studying property-preserving hashing? PPH can be a useful tool for measuring the similarity between objects, and it also has practical applications in areas such as facial recognition. In this work, we focus mainly on the Hamming predicate, because Hamming distance serves as a basic unit of measure and is a prerequisite to understanding more complex metrics. Hamming distance is also currently used to measure the similarity between feature vectors in machine learning.

Here are our results. We have conceptually simpler constructions compared to prior works, and we achieve better parameters under minimal assumptions. Our first construction is an information-theoretic non-robust PPH, which uses syndrome decoding of linear error-correcting codes. In our work, we also prove a lower bound showing that this construction is essentially optimal. Our next construction is a robust PPH from homomorphic collision resistance, which shows how to take our first construction and make it robust while requiring only minimal overhead; this construction also achieves better compression than prior works. Our third construction is another robust property-preserving hash which, while achieving slightly worse compression than our second construction, essentially matches the state of the art while requiring only minimal assumptions such as standard collision resistance. Finally, with our last construction, we study a new notion of randomized robust property-preserving hashing for Hamming, and we provide an information-theoretic construction that achieves optimal parameters.

Thank you so much. We look forward to presenting our work to you at CRYPTO. See you there.
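To make the first construction concrete, here is a minimal Python sketch of the syndrome-decoding idea for the Hamming predicate "distance at most t": the hash of x is the syndrome H·x of a random linear code over GF(2), and Eval accepts if some error vector of weight at most t explains the XOR of the two syndromes. The function names (samp, hash_x, eval_pph), the brute-force decoder, and the toy parameters are illustrative assumptions on my part, not the paper's actual algorithms.

```python
import itertools
import secrets

def samp(n, m):
    """Sample a random m x n parity-check matrix H over GF(2).
    Each row is stored as an n-bit integer (illustrative representation)."""
    return [secrets.randbits(n) for _ in range(m)]

def hash_x(H, x):
    """h(x) = H*x over GF(2): one parity bit per row of H."""
    return tuple(bin(row & x).count("1") & 1 for row in H)

def eval_pph(H, hx, hy, n, t):
    """Output 1 iff some e of Hamming weight <= t satisfies H*e = h(x) XOR h(y),
    i.e. brute-force syndrome decoding of the syndrome difference.
    If dist(x, y) <= t, then e = x XOR y always matches, so we output 1;
    if dist(x, y) > t, a false positive requires x XOR y to lie within
    distance t of a nonzero codeword, which is unlikely over a random H."""
    target = tuple(a ^ b for a, b in zip(hx, hy))
    for w in range(t + 1):
        for positions in itertools.combinations(range(n), w):
            e = 0
            for p in positions:
                e |= 1 << p
            if hash_x(H, e) == target:
                return 1
    return 0

if __name__ == "__main__":
    n, m, t = 32, 24, 2          # hypothetical toy parameters
    H = samp(n, m)
    x = secrets.randbits(n)
    y = x ^ 0b101                # flip two bits: distance 2 <= t
    z = secrets.randbits(n)      # a random string, almost certainly far from x
    print(eval_pph(H, hash_x(H, x), hash_x(H, y), n, t))  # 1
    print(eval_pph(H, hash_x(H, x), hash_x(H, z), n, t))  # 0, except with small probability over H
```

With these toy parameters, two strings at distance at most t always evaluate to 1, while a fixed far pair evaluates to 0 except with probability roughly Vol(n, t) / 2^m over the choice of H, where Vol(n, t) counts the strings of weight at most t (a union bound over the candidate error vectors). The sketch is non-robust, since an adversary who knows H can search for a bad pair, which is what separates this first construction from the robust ones described above.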