• voluble@lemmy.world · 4 months ago

    Interesting. I’m curious to know more about what you think of training datasets. It seems like they could be described as a stored representation of reality that maybe checks the boxes you laid out. It’s a very different structure of representation from what we have as animals, but I’m not sure it can be brushed off as trivial. The way an AI interacts with a training dataset is mechanistic, but as you describe, human worldviews can be described in mechanistic terms as well (I do X because I believe Y).

    You haven’t said it outright, so I might be wrong, but are you pointing to free will and imagination as tied to intelligence in some necessary way?

    • fidodo@lemmy.world · 4 months ago

      I think worldview is all about simulation and maintaining state. It’s not really about making associations, but about maintaining some kind of up-to-date, imaginary state that you can simulate on top of to represent the world. I think it needs to be a very dynamic thing, which is a pretty different paradigm from the ML training methodology.
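
      A minimal toy sketch of what I mean (all the names here are invented for illustration, not any real API): keep a live state object, fold observations into it as they arrive, and “imagine” by rolling a copy of that state forward without committing it. A trained model, by contrast, is a fixed mapping baked into weights up front.

          # Hypothetical sketch of a dynamic world model: maintain state,
          # then simulate on top of it without mutating it.
          class WorldModel:
              def __init__(self):
                  self.state = {}  # continuously updated picture of the world

              def observe(self, fact, value):
                  # fold new information into the maintained state
                  self.state[fact] = value

              def simulate(self, hypothetical):
                  # project the current state under an imagined change,
                  # leaving the real state untouched -- the "imaginary
                  # state you can simulate on top of"
                  projected = dict(self.state)
                  projected.update(hypothetical)
                  return projected

          model = WorldModel()
          model.observe("door", "closed")
          print(model.simulate({"door": "open"}))  # imagined world
          print(model.state)                       # live state unchanged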

      Yes, I view these things as foundational to free will and imagination, but I’m trying to think at a lower level than that. Simulation facilitates imagination, and reasoning facilitates motivation, which facilitates free will.

      Are those things necessary for intelligence? Well, that depends on your definition, and everyone has a different one, ranging from reciting information to full-blown consciousness. Personally, I don’t care much about coming up with a rigid definition; it’s just a word, and I care more about the attributes. I think LLMs are a good knowledge engine, and knowledge is a component of intelligence.