• 1 Post
  • 857 Comments
Joined 11 months ago
Cake day: August 5th, 2023

  • I’ve actually gone out of my way to avoid it, but that has nothing to do with the accuracy of the results (although I would need those results to be accurate) and everything to do with avoiding ads and using web search to find very specific, detailed information rather than a summary.

    In my short experience with AI features in search specifically, I often couldn’t see the source of the information without clicking through and scrolling down, or continuing the conversation with more prompts. I don’t want that. It regularly slows down my workflow, and that’s the intention: to keep me on the page, making additional queries and looking at more ads.

    I have used Gemini on my phone, though, and it’s actively worse than Google Assistant and Home Assistant in a lot of ways. Features that have let me control smart devices for years are now broken or unreliable, more so than they were after the Sonos lawsuit.

    I want my devices to work. I don’t want to have a conversation with a device to turn on the lights or find out what the weather is like. Bottom line: the point of my comment was that, obnoxious to you or not, nobody is under attack for using AI products.



  • It is until you end up having to blacklist Zelle because your banking information was used to defraud someone. I actually had my account broken into: funds were deposited via Zelle and then all available funds were removed from my account in the space of about an hour. I went to pay for something the day after and had to call my bank’s fraud department. They tried the same thing with a second account of mine, but it was flagged immediately when they tried to reuse the same login credentials (the two accounts’ credentials weren’t remotely the same). So no Zelle for me. It’s permanently disabled by both my banks for security reasons.


  • Which makes the point that while LLMs can be useful and can be improved, ham-fisting them into every product you make as a company because you have FOMO is ill advised and aggravating, especially when you pay people to be subject matter experts in the field and they tell you it’s a bad idea. That’s what the article said, in some very verbose language. Your attention span must be severely lacking, because you could read the article and glean that simple point from the words on the page. I read it, and it was entertaining and insightful.

    You seem like someone who needs paragraphs boiled down to a single sentence.


  • I’m inclined to believe, based on this thread, that you and the person you’re replying to didn’t read the article, because the author and most of the replies are not saying “LLMs are garbage and have no benefits”.

    The post is specifically calling out companies that have jumped on the LLM train and are trying to force-feed it into every single project and service, regardless of whether it will be useful or beneficial. And they will not listen to the people working in the field who tell them it will not be.

    The hype is what people are upset about: companies are selling something that is useful in select cases as something that will be universally useful to everyone for just about everything, and they’re making products worse in the process.

    Just look at Google and its implementation of LLMs in search results. That’s a product that isn’t useful unless it’s accurate, and it was not ready to be a public-facing service. In Google’s other products, the AI promises more but actually breaks or removes features that users have relied on for years. That’s why people are upset. And that isn’t even taking into account the theft of people’s work that went into training these LLMs.

    This is literally just about companies having more FOMO than sense. It’s about them creating and shipping broken iterations of products stuffed with the newest “tech marvel” to increase sales or stock price while detrimentally affecting the common user.

    For every case of an LLM being useful there are several where it’s not. That’s the point.



  • The point is to divorce the situation from the cheating aspect so that people can be less emotionally invested in the outcome. Plenty of jobs that handle industry-sensitive information do so over normal communication lines. DARPA was possibly a poor example, because you assume that anything they handle requires a clearance (which I wouldn’t consider true). Something as simple as tracking the whereabouts of a naval ship can be, and has been, done via Facebook posts from people on board or their families.

    The point is that it wasn’t clear to the user that their information wasn’t being deleted in real time, and that’s poor transparency on the company’s part, because a lot of users probably assume the same, judging by the comments I see here.


  • Did they need a slash s for this? Did they? Because people like you make me believe they needed a slash s. Obviously that was a sarcastic comment, because the original comment it responded to was horribly flawed. There are whole industries built on the idea that a business can be destroyed by liability; it’s literally why we have liability insurance. So when someone responds to that comment with an equally flawed statement that is clearly meant to be sarcastic, do we just ignore that because we feel their statement is wrong? What even is this.


  • On the one hand, I don’t know that it’s fair to sue a company over your poor understanding of technology, or over user error. On the other hand: suppose he worked for DARPA and was using iMessage to talk to his boss or his team about a project, the project was then leaked or sold by someone living in his home who had access to his home laptop, because he didn’t know that the messages he deleted weren’t deleted in real time, and he was fired for it. That seems like something the company should make very clear at the moment of deletion. A simple warning would do: “Delete this message? Please be aware that deletion is not instantaneous across devices.”

    Incognito mode actually has to tell users that it doesn’t prevent your ISP from seeing what you search for or what websites you visit while using it. Google literally had to add a notification so people would know, because people didn’t know.




  • I don’t think you understand just how prevalent this situation is, or what they would need to do for me to “turn them in” for basically being on the wrong side of the political fence. For one, you’re assuming the person or persons in charge don’t feel the same way (chain of command isn’t the kind of thing you just skip because some of them happen to be suspect). Second, they actually have to do something against the UCMJ for me to “turn them in”. Thinking that the government should be overthrown if it oversteps is constitutional. Thinking you could overturn a free and legal public election is not constitutional, but it’s also not against the rules.

    You can’t turn people in for thinking, only for acting. You’re coming off as a troll, and I’m done with you following me through the thread.