Pete Warden has some advice:
Over the last decade I’ve helped hundreds of product teams ship ML-based products, inside and outside of Google, and one of the most frequent questions I got was “How do I protect my models?”. This usually came from executives, and digging deeper it became clear they were most worried about competitors gaining an advantage from what we released. This worry is completely understandable, because modern machine learning has become essential for many applications so quickly that best practices haven’t had time to settle and spread. The answers are complex and depend to some extent on your exact threat models, but if you want a summary of the advice I usually give it boils down to:
– Treat your training data like you do your traditional source code.
– Treat your model files like compiled executables.
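One concrete way to read the second point: handle a model file the way you would a compiled release binary, recording a checksum when you produce it and verifying that checksum before you load it. A minimal sketch (the filename and file contents here are hypothetical stand-ins, not anything from Pete's post):

```python
import hashlib
from pathlib import Path

def sha256_of(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Stand-in for a real model artifact; record its digest at build time,
# the way you would publish a checksum alongside a compiled binary.
model_path = Path("model.tflite")  # hypothetical filename
model_path.write_bytes(b"\x00fake model weights\x00")
expected = sha256_of(model_path)

# Later, before loading the model, verify it still matches.
assert sha256_of(model_path) == expected, "model file has been altered"
```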
Read on to see how Pete arrived at that answer, along with what I can only assume is a sly mention of duck boat tours.