Interesting experiment! Made me think of the book “how we learn: the new science of education and the brain”
Since the work was done with the military, it will have some applications there for sure. However, I’m more looking forward to applications in infrastructure quality evaluation and robotics :)
Poke
Before selling the 5, they should make it so one can buy the 4 x)
Thanks for sharing! I’ll have a look later; it sounds great!
I think it’s nothing particularly weird, I’ve always assumed that there are spores in the soil and it happens when it gets a bit too much water, no? I don’t think they need to worry :)
Any way to bypass the paywall for this article :)?
I read that paper and it’s really incredible. The results are super impressive.
That’s a fairly terrifying scenario. Like putting open doors into our brains
Thank you so much! I was also advised to use dish soap with water :). Is that good?
Thanks! I know what to google now!
I’m sorry my answer triggered something; it wasn’t meant to. In my experience people here are nice and friendly, so I hope you manage to feel safe and welcomed.
What 😅? Ok I guess 😅
I was just saying it’s hard to make a company spend more “just” for the environment, but it’s still important to do.
I’m trying at mine and it goes exactly as you would think…
Thanks for being so responsive!
It seems that the problem is fixed now, but the fix is not yet upstream (it should be soon).
In the study, physicians found more inaccuracies and irrelevant information in answers provided by Google’s Med-PaLM and Med-PaLM 2 than in those of other physicians.
It’s a bit like every other use of AI IMO: the challenge is to make people understand that it’s a fancy information retrieval system, and thus it is flawed and not to be blindly trusted. There was a study on use in professional settings that showed that models such as ChatGPT helped low performers much more than high performers (who barely improved at all thanks to the model). If this model is used to help less competent doctors (without judgement, they could be beginning their careers) while maintaining a certain degree of doubt, then that could be very good.
However, the ramifications of a wrong diagnosis from the AI are quite scary, especially considering that AIs tend to repeat the biases of their training dataset, and even curated data is not free of bias.
I would advise not training your own model but instead using tools like langchain and chroma, in combination with an open model like gpt4all or falcon :).
So in general explore langchain!
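To illustrate the idea behind that stack (retrieve relevant documents first, then feed them to the model as context), here is a minimal plain-Python sketch of the retrieval step. It uses a toy bag-of-words embedding instead of a real embedding model, and all names (`embed`, `retrieve`, `build_prompt`, the sample documents) are hypothetical, not actual langchain or chroma APIs:

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; real stacks use a neural embedding model.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical sample corpus; chroma would store these as embedding vectors.
documents = [
    "Chroma is a vector database for storing embeddings.",
    "Falcon is an open large language model.",
    "Bread is baked from flour and water.",
]

def retrieve(query, docs, k=1):
    # Return the k documents most similar to the query.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, docs):
    # Stuff the retrieved context into the prompt sent to the open model.
    context = "\n".join(retrieve(query, docs))
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("What is Chroma?", documents)
```

Langchain essentially wires these steps together for you (vector store, retriever, prompt template, model call), so you get this pattern without writing the plumbing yourself.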
Thanks, I hadn’t realized that!
In the only loaf-like picture I have, it didn’t find my cat because it only shows her butt :(