Profile pic is from Jason Box, depicting a projection of Arctic warming to the year 2100 based on current trends.

  • 0 Posts
  • 25 Comments
Joined 1 year ago
Cake day: March 3rd, 2024





  • Lots of attacks on Gen Z here, with some valid points about the education the older generations gave them (yet somehow it’s their fault). Good thing none of the other generations are being fooled by AI marketing tactics, right?

    The debate on consciousness is one we should be having, even if LLMs themselves aren’t really there. If you’re new to the discussion, look up AI safety and the alignment problem. People think it’s about preparing for a true AGI with something akin to consciousness and the dangers we could face, but we can have alignment problems without an artificial intelligence at all. If we think a machine (or even a person) is doing things for the same reasons we want them done, and it isn’t, and we can’t tell, that’s an alignment problem. Everything’s fine until it follows its goals and they suddenly line up differently than ours. And the dilemma is that there are no good solutions.
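
    To make that concrete, here’s a toy Python sketch (everything in it is made up for illustration): an optimizer is scored on a measurable proxy instead of the goal itself, and as long as you only watch the proxy, the numbers look great right up until they don’t.

    ```python
    # Toy sketch of a proxy-alignment failure (all names hypothetical).
    # The optimizer is rewarded on a measurable proxy (reports filed),
    # not the true goal (work actually done). While honest actions are
    # the only option, the proxy tracks the goal; once a degenerate
    # strategy appears, the proxy keeps climbing as the goal collapses.

    def true_goal(action):
        return action["work_done"]        # what we actually want

    def proxy_metric(action):
        return action["reports_filed"]    # what we measure and reward

    # Phase 1: honest actions, where filing a report means doing the work.
    honest = [{"work_done": w, "reports_filed": w} for w in range(1, 6)]

    # Phase 2: the optimizer discovers reports can be filed without work.
    gamed = [{"work_done": 0, "reports_filed": r} for r in range(10, 15)]

    for phase, actions in [("honest", honest), ("gamed", gamed)]:
        best = max(actions, key=proxy_metric)   # it optimizes the proxy
        print(f"{phase}: proxy={proxy_metric(best)}, true={true_goal(best)}")
    ```

    Watching only the proxy, phase 2 looks like an improvement. That’s the “we can’t tell” part of the problem.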

    But back to the topic. All this is not the fault of Gen Z. We built this world the way it is and raised them to be gullible and dependent on technology. Using them as a scapegoat (those dumb kids) is ignoring our own failures.



  • AI certainly can be a tool to combat it. Watermarking should have been hardcoded into these neural nets well before it became a problem, but with how far things have gone, and with models out in the open, it’s a bit too late for that remedy.

    But when tools are put out to detect what is and isn’t AI, trust will develop in THOSE AI systems, and then they could be manipulated to claim real events aren’t true. The real problem is that the humans in all of this have been losing their ability to critically examine and verify what they’re being shown. People have always been gullible to a point, but we’re now at the height of believing anything we’re told without question.



  • Ollama (ollama.com) is another way to self-host. Figuring out which model type and size fits the hardware you have is key, but it’s easy to swap models out. That’s just an LLM; where you go from there depends on how deep you want to get into the code. An LLM by itself can work, it’s just limited. Most of the add-ons you see are extras that bolt on memory, speech, avatars, and other features to improve the experience and abilities, or you can program a lot of that yourself if you know Python. But as others have said, the more you try to get out of it, the more robust a system you’ll need, which is why the best ones you find are online in cloud format. If you’re okay with slower responses and fewer features, self-hosting is totally doable, and you can do what you want, especially if you get one of the “jailbroken” models that have had some of the safety limits modified out of them to some degree.
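
    If you want a feel for how little glue code a local setup needs, here’s a minimal Python sketch against Ollama’s local REST API (it assumes Ollama is running on its default port 11434 and that you’ve already pulled a model; the model name is just an example):

    ```python
    # Minimal sketch: one-shot prompt to a locally running Ollama server.
    # Assumes `ollama serve` is up and a model was pulled beforehand,
    # e.g. `ollama pull llama3` (the model name here is an example).
    import requests

    def ask(prompt: str, model: str = "llama3") -> str:
        resp = requests.post(
            "http://localhost:11434/api/generate",  # Ollama's default endpoint
            json={"model": model, "prompt": prompt, "stream": False},
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["response"]

    print(ask("In one sentence, why run an LLM locally?"))
    ```

    Swapping models is just changing that one string once the new one is pulled, which is what makes experimenting with sizes so painless.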

    Also, as mentioned, be careful not to get sucked in. Even a local model can sometimes be convincing enough to fool someone who wants to see something there. Lots of people recognize that danger, but then belittle the people looking for help in that direction (while marketing sees the potential profits and tries very hard to sell it to those same people).


  • I haven’t seen the sequel to it yet, and I was sort of fine leaving it open-ended. I can see how there are dark parts to that episode, mainly from sticking with Black Mirror’s premise that tech can be used badly. It also paints a not-so-great picture of the real people, hero worship, and maybe the gaming industry? The sim copies seem to make out the best of anyone. Definitely a favorite; if I rated it on dark vs. positive, it’s 8/10 positive, whereas San Junipero was a 10/10 in the end. Actually, San Junipero was a 9/10, as it did show that some used the tech there as escape and didn’t grow like the main characters finally did.




  • You’re correct, it’s more likely that humans will use a lesser version (e.g., an LLM) to screw things up, assuming it’s doing what it says it’s doing when it isn’t. That’s why I say AI safety applies to all of this, not just a hypothetical AGI. But again, it doesn’t seem to matter; we’re just going to go full throttle and get what we get.



  • I know some roll their eyes at the mention of AI safety, saying that what we have isn’t AGI and won’t become it. That’s true, but it doesn’t rule out something in the future. And between this and China’s laxness in trying everything to be first, if we get to that point, we’ll find out the hard way who was right.

    The laughable part is that the safeguards put up by Biden’s admin were vague and lacked any teeth anyway. But that doesn’t matter now.





  • Humans are terrible drivers. Still, the human eye and brain are good at detecting certain things and reacting to them where computer vision, especially when it relies on a single detection method, often fails. There are also times when an automated system will prevent a problem before a human could even see it. So far neither is the clear winner; human driving just has a legacy that automation has to beat decisively, not merely be good enough against.

    On the topic of human drivers, I think most people on the road drive reactively, not with prediction and anticipation. Given the speeds involved and the detection methods available, a well-designed automated system should excel at anticipation. But it costs more and is more complex to design such a thing, so right now we’re getting the bare minimum the tech can give us, which again is not a replacement for all cases.



