
  • Definitely. It has some alignment, but it won’t straight up refuse to do anything. It will sometimes add a note saying that what you’ve asked is maybe against the law, but it produces a great response regardless. It’s a 70b, so running it locally is a challenge, but for those who can run it there is simply no other LLM you can run at home that gets even close. It follows instructions amazingly well, it’s very consistent, and it barely hallucinates. There is some special Mistral sauce in it for sure, even if it’s “just” a Llama2-70b.
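
    To see why a 70b is a challenge at home, here’s a back-of-the-envelope sketch of the memory the weights alone need at different bit widths (a rough estimate; KV-cache and runtime overhead come on top, and the function name is just for illustration):

```python
# Rough VRAM estimate for an LLM's weights alone -- a back-of-the-envelope
# sketch, not an exact figure for any particular runtime.
def model_vram_gb(n_params_billion: float, bits_per_weight: float) -> float:
    """Approximate memory for the weights, in GiB."""
    bytes_total = n_params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 2**30

for bits in (16, 8, 4):
    print(f"70B @ {bits}-bit: ~{model_vram_gb(70, bits):.0f} GiB")
```

    Even at 4-bit that lands around 33 GiB of weights, which is why a single 24 GB consumer card can’t hold it.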


  • The info in this thread is mostly incorrect - the error has nothing to do with the SD card you plug into the server.

    This error happens because early versions of iLO had a bug where flash memory wear levelling was not enabled. It results in a failed flash chip unless iLO was updated early on.

    The SD card is embedded - it’s NOT the one you plug in; it’s soldered onto the motherboard. You can try formatting it (there is an HP support advisory on this; it requires sending a special XML payload), but the chances of bringing it back to life are slim. Out of the 10+ machines with these symptoms I’ve seen, only two were alright after the flash format. The proper fix is to desolder and replace the chip on the motherboard…

    Part number for the chip: SDIN7DP2-4G

    Here’s the link to support advisory: https://support.hpe.com/hpesc/public/docDisplay?docId=emr_na-c04996097



  • There is a bit of a conundrum here: to run a model that is any good at coding you want it to have a lot of parameters (the more the better), but since it’s code and not a spoken language, precision also matters. Home hardware like a 3090 can run ~30b models, but there is a catch - they just barely fit, and only in quantized form, which typically means 4x worse precision. Unless we see some breakthrough that makes inference of huge models possible at full precision, hosted AI will always be better for coding. Not that such a breakthrough is impossible - quite the opposite, in my opinion.
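
    The precision loss from quantization can be sketched with a deliberately naive round-to-nearest scheme (real methods like GPTQ or AWQ quantize per-group with calibration data, so this overstates the damage, but the trend is the same):

```python
# Naive symmetric round-to-nearest quantization, to illustrate how fewer
# bits per weight means coarser reconstruction. Illustrative only.
def quantize_dequantize(weights, bits):
    qmax = 2 ** (bits - 1) - 1            # e.g. 7 for 4-bit signed
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) * scale for w in weights]

weights = [0.731, -0.112, 0.054, -0.987, 0.410]
for bits in (8, 4):
    restored = quantize_dequantize(weights, bits)
    err = max(abs(a - b) for a, b in zip(weights, restored))
    print(f"{bits}-bit max abs error: {err:.4f}")
```

    Dropping from 8-bit to 4-bit visibly widens the worst-case rounding error per weight, and in code generation a single wrong token can break the output.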