Gaywallet (they/it)

I’m gay

  • 224 Posts
  • 996 Comments
Joined 3 years ago
Cake day: January 28th, 2022

  • The pronouns are right there, in the display name. I’m confused, do they not show up for you? You’re on our instance, so I’m guessing it’s not a front-end difference, but maybe you’re browsing on an app that doesn’t show it appropriately? I would also mention that their username itself includes the words “IsTrans” and is sourced from lemmy.blahaj.zone, so those should be other key indicators.

    I was hardly about to ban you over a small mistake. The only reason I even replied is that multiple people reported it and Emily herself came in and corrected you. The action was more about signaling to others that this is a safe space.





  • Any information humanity has ever preserved in any format is worthless

    It’s like this person only just discovered science, lol. Have they never realized that bias is a thing? There’s a reason we learn to cite our sources: people need the context of what bias is being shown. Entire civilizations have been erased by the people who conquered them; do you really think the conquerors didn’t rewrite the history of who those people were? Has this person never followed scientific advancement, where people test and validate that results can be reproduced?

    Humans are absolutely gonna human. The author is right to realize that a single source holds a lot less factual accuracy than many sources, but it’s catastrophizing to call it worthless, and it ignores how additional information can add to or detract from a particular claim, so long as we examine the biases present in the creation of said information resources.


  • I’ve personally found it’s best to just directly ask questions when people say things that are cruel, come from a place of contempt, or are otherwise trying to start conflict. “Are you saying x?”, but in much clearer words, is a great way to get people to reveal their true nature. There is no need to be charitable once you’ve asked and they either don’t back off or agree with whatever terrible sentiment you just asked whether they held. Generally speaking, people who aren’t malicious will not only back off from what they’re saying but will put in extra work to clear up any confusion. If someone doesn’t bother to clear up any confusion around some perceived hate or negativity, that can be a more subtle signal they aren’t acting in good faith.

    If they do back off, but only as a means to bait you (such as by refusing to elaborate or by deflecting), they’ll invariably continue to push boundaries or make other masked statements. If you stick to that same strategy, have to ask for clarification three times, and they keep pushing in the same direction, I’d say it’s safe to move on at that point.

    As an aside: it’s usually much more effective to feel sad for them than it is to be angry or direct. But honestly, it’s better to simply not engage. Most of these folks are hurting in some way, and they’re looking to offload the emotional labor onto others, or to quickly feel good about themselves by putting others down. Engaging just reinforces the behavior and frankly just wastes your time, because it’s not about the subject they’re talking about… it’s about managing their emotions.






  • This isn’t just about GPT. Of note, one example from the article:

    The AI assistant conducted a Breast Imaging Reporting and Data System (BI-RADS) assessment on each scan. Researchers knew beforehand which mammograms had cancer but set up the AI to provide an incorrect answer for a subset of the scans. When the AI provided an incorrect result, researchers found inexperienced and moderately experienced radiologists dropped their cancer-detecting accuracy from around 80% to about 22%. Very experienced radiologists’ accuracy dropped from nearly 80% to 45%.

    In this case, researchers manually spoiled the results of a non-generative AI designed to highlight areas of interest. Being presented with incorrect information reduced the radiologists’ accuracy. This kind of bias is important to highlight and is of critical importance when we talk about when and how to ethically introduce any form of computerized assistance in healthcare.

  • to make a long story short: getting our money out of the old collective and into the new one was actually much more of a mess than we thought

    For anyone curious about the details: I had to step in to help ensure this actually happened because, well, tax law is complicated and none of us are experts. Ultimately, our current financial host OCE had to bring on a US-based company in order to allow a transfer of tax-exempt funding. On top of that, we had to submit an application and enter an agreement with this partner company so that they could open a bank account on our behalf, because having a bank account and an agreement with OCE was not enough. What a headache!

    Thanks to everyone who set up donations on OCE as soon as we transitioned; that was actually super helpful! For the rest of you who used to donate and were waiting until we had fully transitioned over to OCE to restart your donations, you are free to do so now, and given our current deficit it would be most appreciated!



  • It’s FUCKING OBVIOUS

    What is obvious to you is not always obvious to others. There are already countless examples of AI being used to do things like sort through job applicants, decide who gets audited by child protective services, and determine who can get a visa for a country.

    But it’s also more insidious than that, because the far-reaching implications of this bias often cannot be predicted. For example, excluding all gender data from training ended up making sexism worse in this real-world example of financial lending assisted by AI, and the same was true for Apple’s credit card. We even have full-blown articles showing how the removal of data can actually reinforce bias, indicating that it’s not just what material is used to train the model but also what data is not used or explicitly removed (see the toy sketch at the end of this comment).

    This is so much more complicated than “this is obvious”, and there are a lot of signs pointing towards the need for regulation around AI and ML models being used in places where it really matters, such as decision making, until we understand them a lot better.
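
    To make the proxy-bias point concrete, here is a minimal sketch, not taken from any of the linked studies: the data is synthetic and every variable name (gender, proxy, income, approved) is hypothetical. It just shows that a model trained with the gender column dropped can still reproduce a gendered approval gap when a correlated proxy feature remains in the training data.

    ```python
    # Minimal, hypothetical sketch: dropping a protected attribute
    # does not remove bias when a correlated proxy feature remains.
    # Synthetic data; not the methodology of the studies linked above.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    gender = rng.integers(0, 2, n)                 # protected attribute (0/1)
    proxy = gender + rng.normal(0, 0.5, n)         # e.g. an occupation code correlated with gender
    income = rng.normal(50, 10, n)

    # Historical approvals were themselves biased against gender == 1
    approved = ((income + 5 * (1 - gender) + rng.normal(0, 5, n)) > 52).astype(int)

    # "Fair" model: the gender column is excluded, but the proxy stays in
    X = np.column_stack([income, proxy])
    pred = LogisticRegression().fit(X, approved).predict(X)

    # The model never saw gender, yet predicted approval rates still diverge
    for g in (0, 1):
        print(f"gender={g}: approval rate {pred[gender == g].mean():.2f}")
    ```

    The model here simply learns the bias baked into the historical labels through whatever correlated signal is left, which is why “just remove the sensitive column” is not a fix.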