With the increasing complexity of machine learning models, even their designers can't fully understand how they function (which inputs lead to a given output). Open source doesn't automatically mean safe, either. And even if a model functions as intended, what happens when a vulnerability (or zero-day) is discovered, or when the device reaches end of service life (EOSL)?