

Have we got images of the font they’re using?
Profile pic is from Jason Box, depicting a projection of Arctic warming to the year 2100 based on current trends.
Figures, I just purchased an account last week. That’s fine, it’s not that expensive anyway and hopefully this helps more people decide to move from Google, like I’m in the process of doing.
That’s a bit of a reach. We should have stayed in the trees though, but the trees started disappearing and we had to change.
Lots of attacks on Gen Z here, with some valid points about the education they were given by the older generations (yet somehow it's their fault). Good thing none of the other generations are being fooled by AI marketing tactics, right?
The debate on consciousness is one we should be having, even if LLMs themselves aren't really there. If you're new to the discussion, look up AI safety and the alignment problem. Then realize that while people think it's about preparing for a true AGI with something akin to consciousness and the dangers we could face, we can have alignment problems without an artificial intelligence. If we think a machine (or even a person) is doing things for the same reasons we want them done, and they aren't but we can't tell that, that's an alignment problem. Everything's fine until they follow their goals and those goals suddenly diverge from ours. And the dilemma is: there are no good solutions.
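A toy illustration of that last point (my own numbers, not modeled on any real system): an optimizer is told to maximize a proxy metric that tracks the real goal at first, then diverges, and the optimizer happily follows the proxy off a cliff.

```
# Toy misalignment sketch (hypothetical, purely illustrative).
import numpy as np

def true_goal(x):      # what we actually want (peaks at x = 3)
    return -(x - 3) ** 2 + 9

def proxy_metric(x):   # what the system is told to maximize (rises forever)
    return 2 * x

xs = np.linspace(0, 6, 61)
proxy_best = xs[np.argmax(proxy_metric(xs))]  # optimizer picks x = 6
goal_best = xs[np.argmax(true_goal(xs))]      # we actually wanted x = 3

# Up to x = 3 both curves rise together and everything looks fine;
# past that the proxy keeps improving while the true goal collapses.
print(f"proxy-optimal action x={proxy_best}: true value {true_goal(proxy_best):.1f}")
print(f"goal-optimal action  x={goal_best}: true value {true_goal(goal_best):.1f}")
```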
But back to the topic. All this is not the fault of Gen Z. We built this world the way it is and raised them to be gullible and dependent on technology. Using them as a scapegoat (those dumb kids) is ignoring our own failures.
All companies will eventually try to become monopolies if they get large enough. It's the nature of capitalism to do whatever it takes for profit and growth. That's why regulations are a good thing: they set limits where a company on its own never will.
AI can certainly be a tool to combat it. Some type of watermarking should have been hardcoded into these neural nets well before it became a problem, but with how far it's gone and with models out in the open, it's a bit too late for that remedy.
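For what it's worth, published schemes for this do exist. Here's a toy sketch of the detection side (my own simplification, loosely based on the "green list" logit-bias idea from Kirchenbauer et al.): the generator nudges sampling toward a pseudorandom subset of the vocabulary at each step, and a detector re-derives that subset and counts hits.

```
# Toy statistical watermark detector (hypothetical simplification).
import hashlib
import random

VOCAB_SIZE = 1000      # toy vocabulary of token ids
GREEN_FRACTION = 0.5   # fraction of the vocab marked "green" at each step
Z_THRESHOLD = 4.0      # standard deviations above chance we call "watermarked"

def green_list(prev_token):
    # Deterministically re-derive the same green/red split from the
    # previous token, so the detector needs no access to the model.
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(range(VOCAB_SIZE), int(VOCAB_SIZE * GREEN_FRACTION)))

def looks_watermarked(tokens):
    # A watermarking generator would have biased sampling toward each
    # step's green list; here we just count how often that happened.
    hits = sum(cur in green_list(prev) for prev, cur in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    expected = n * GREEN_FRACTION
    stddev = (n * GREEN_FRACTION * (1 - GREEN_FRACTION)) ** 0.5
    return (hits - expected) / stddev > Z_THRESHOLD
```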
But when tools are put out to detect what is and isn't AI, trust will develop in THOSE AI systems, and then they could be manipulated into claiming actual real events aren't true. The real problem is that the humans in all of this have been losing their ability to critically examine and verify what they're shown. People have always been gullible to a point, but right now we're at a peak of believing anything we're told without question.
A company would look at this and conclude not that LLMs might have something going on that's bad for long-term business, but that there's a bigger net dollar amount and they just have to calculate when to "reset" the LLM. It's just another IT problem where the solution isn't to address the root cause but to find a workaround that reduces cost while continuing operations.
Ollama (ollama.com) is another method of self-hosting. Figuring out which model type and size fits the hardware you have is key, but it's easy to swap models out. That's just an LLM; where you go from there depends on how deep you want to get into the code. An LLM by itself can work, it's just limited. Most of the add-ons you see are extras that give it memory, speech, avatars, and other features to improve the experience and abilities. Or you can program a lot of that yourself if you know Python. But as others have said, the more you try to get out of it, the more robust a system you'll need, which is why the best ones you find are online in cloud form. If you're okay with slower responses and fewer features, though, self-hosting is totally doable, and you can do what you want, especially if you get one of the "jailbroken" models that has had some of the safety limits modified out of it to some degree.
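For a concrete starting point: after installing from ollama.com, `ollama pull <model>` then `ollama run <model>` gets you a terminal chat. If you want to script against it, a minimal sketch with the official `ollama` Python package looks something like this (the model name is just an example; pick one sized for your hardware):

```
# Minimal sketch using the `ollama` Python package (pip install ollama).
# Assumes the Ollama server is already running locally.
import ollama

response = ollama.chat(
    model="llama3",  # example only; use whatever model you've pulled
    messages=[{"role": "user", "content": "Why self-host an LLM?"}],
)
print(response["message"]["content"])
```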
Also as mentioned, be careful not to get sucked in. Even a local model can be convincing enough sometimes to fool someone wanting to see things. Lots of people recognize that danger, but then belittle people who are looking for help in that direction (while marketing realizes the potential profits and tries very hard to sell it to the same people).
I haven't seen the sequel to it yet, and was sort of fine leaving it open-ended. I can see how there are dark parts to that episode, mainly from sticking with Black Mirror's premise that tech can be used badly. It also paints a not-so-great picture of the real people, hero worship, maybe the gaming industry? The sim copies seem to make out the best of anyone. Definitely a favorite; if I rated it on dark vs. positive, it's 8/10 positive, whereas San Junipero was a 10/10 in the end. Actually, San was a 9/10, as it did show that some used the tech there as escape and didn't grow like the main characters finally did.
This felt like reading about someone complaining that The Daily Show doesn't have enough positive news stories. Black Mirror fills a niche that people look for; it isn't something that's making people think a certain way.
And no mention at all of San Junipero. I guess that would break selling it as pessimism porn when there’s examples otherwise.
Right? It’s the biggest circus of all. Do you really want all those clowns looking for something else to do?
You're correct, it's more likely that humans will use a lesser version (e.g., an LLM) to screw things up, assuming it's doing what it says it's doing while it's not. That's why I say AI safety applies to all of this, not just a hypothetical AGI. But again, it doesn't seem to matter; we're just going to go full throttle and get what we get.
I know there are some who roll their eyes at the mention of AI safety, saying that what we have isn't AGI and won't become it. That's true, but it doesn't rule out something in the future. And between this and China's anything-goes rush to be first, if we get to that point, we'll find out the hard way who was right.
The laughable part is that the safeguards put up by Biden's admin were vague and lacking any real substance anyway. But that doesn't matter now.
NASA wanted to do a lot more after Apollo but was told absolutely no way by Congress. The shuttles we got were a leftover fragment of the grand plan that would definitely have gotten us into the category of a spacefaring civilization. Assuming all went well, of course. It’s possible we would have had Challenger/Columbia level disasters even if the money was there. Ad astra per aspera.
searching Walmart website
Not yet.
The real market if this does hit actual shelves is whoever creates adapters for existing products.
Humans are terrible. The human eye and brain are good at detecting certain things, though, allowing a reaction where computer vision, especially computer vision relying on a single detection method, often fails. There are times when an automated system will prevent a problem before a human could even see it. So far neither is the clear winner; human driving just has a legacy that automation has to beat by a wide margin, not merely match as good enough.
On the topic of human drivers, I think most on the road drive reactively, not based on prediction and anticipation. Given the speeds and detection methods available, a well-designed automated system should excel at this. It costs more and it's more complex to design such a thing, so we're getting the bare-bones minimum tech can give us right now, which again is not a replacement for all cases.
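Some back-of-the-envelope numbers on that (mine, purely illustrative): what anticipation buys you is reaction time, and at highway speed reaction time is a lot of meters.

```
# Hypothetical stopping-distance comparison (illustrative numbers only).

def stopping_distance(speed_mps, reaction_s, decel_mps2=7.0):
    # Distance covered during the reaction delay plus the braking distance.
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

speed = 30.0  # ~108 km/h (~67 mph)
human = stopping_distance(speed, reaction_s=1.5)         # typical human delay
reactive_av = stopping_distance(speed, reaction_s=0.2)   # sensor-to-brake latency
predictive_av = stopping_distance(speed, reaction_s=0.0) # hazard anticipated early (idealized)

print(f"human:         {human:.0f} m")   # ~109 m
print(f"reactive AV:   {reactive_av:.0f} m")   # ~70 m
print(f"predictive AV: {predictive_av:.0f} m")  # ~64 m
```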
Real Genius
“This? This is ice. This is what happens to water when it gets too cold. This? This is Kent. This is what happens to people when they get too sexually frustrated.”
Legend of so many films.
To paraphrase a Monty Python firing squad sketch: "You MISSED???"