  • 3 Posts
  • 199 Comments
Joined 8 months ago
Cake day: October 18th, 2023

  • It’s a surprisingly good comparison, especially when you look at the reactions: frame breaking vs. data poisoning.

    The problem isn’t progress; the problem is that some of us disagree with the idea that what’s being touted is actual progress. The things LLMs are actually good at, they’ve been doing for years (language translation); the rest is so inexact it can’t be trusted.

    I can’t trust any LLM-generated code, because it lies about what it’s doing, so I need to verify everything it generates anyway, at which point it’s easier to write it myself. I keep trying it, and it looks impressive right up until it ends up as a much worse version of something I could have already written. (A toy illustration of that failure mode is sketched at the end of this comment.)

    I assume it’s the same with every field I’m not an expert in, in which case it’s worse than useless to me: I can’t trust anything it says.

    The only thing I can use it for is to tell me things I already know, which basically makes it a toy or a game.

    That’s not even getting into the security implications of giving shitty software access to all your sensitive data, etc.
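
    A minimal, hypothetical sketch of that “looks impressive until you test it” failure mode (the helper, the bug, and the test below are made up for illustration, not taken from an actual LLM session):

```python
# Hypothetical example: a plausible-looking "generated" helper that reads fine
# at a glance but silently drops data on an edge case.

def chunk(items, size):
    """Split `items` into consecutive chunks of length `size`."""
    # Bug: the range stops early, so any trailing partial chunk is lost.
    # chunk([1, 2, 3, 4, 5], 2) -> [[1, 2], [3, 4]]  (the 5 disappears)
    return [items[i:i + size] for i in range(0, len(items) - size + 1, size)]


def test_chunk_keeps_trailing_items():
    # The kind of verification you end up writing anyway before trusting it.
    assert chunk([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]


if __name__ == "__main__":
    test_chunk_keeps_trailing_items()  # fails with AssertionError, exposing the bug
```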

  • Source: I’ve been an embedded software engineer for 10+ years.

    This seems like a pretty decent resource, generally speaking. I’ll add one caveat, though.

    If your threat model includes anyone with state-level resources, stay very far away from anything with a radio in it. Wi-Fi, Bluetooth, NFC, whatever; it doesn’t matter. The radio can be compromised at the silicon level, which means you can never be sure the device is fully secure.

    You have to assume that anything transmitted via RF of any type is capable of being collected and compromised.

    All that said, if your threat model really does include people with black helicopters, you already know this. If it doesn’t, just remember that these surveillance technologies are getting cheaper and more ubiquitous all the time (see Stingray cell-site simulators), so be careful.