

Something happens Americanly in America
Americans: “What are we, a bunch of Untermensch Asians???”
But ERP isn’t a cool buzzword, so it can fuck off; it’s 2025.
You’re misunderstanding tool use: the LLM only requests that something be done, then the actual system returns the result. You can also have the LLM summarize the result, and hallucinations in that workload are remarkably low (though without tuning it can drop important information from the response).
The place where it can hallucinate is the entry stage: generating steps from your natural-language query. That’s why you need to safeguard like your ass depends on it. (Which it does, if your boss is stupid enough.)
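The split described above can be sketched in a few lines. This is a toy illustration, not a real API: `query_llm()` and `run_tool()` are hypothetical stand-ins, and the invoice tool is made up. The point is that the LLM only *requests* an action and the deterministic system does the actual work.

```python
# Toy tool-use loop. query_llm() and run_tool() are hypothetical
# placeholders, not any real library's API.
import json

def run_tool(name, args):
    # The rigid, deterministic system: it either answers or errors out.
    # It never hallucinates.
    tools = {"get_invoice_total": lambda inv: {"invoice": inv, "total": 1299.00}}
    return tools[name](**args)

def query_llm(prompt):
    # Stand-in for the model: in reality this would be a structured
    # tool call parsed from the model's output.
    return {"tool": "get_invoice_total", "args": {"inv": "INV-42"}}

def answer(user_query):
    call = query_llm(user_query)                    # LLM only *requests* an action
    result = run_tool(call["tool"], call["args"])   # real system does the work
    return json.dumps(result)                       # optionally summarized by the LLM

print(answer("What's the total on invoice INV-42?"))
```

The safeguarding belongs at the `query_llm` step: validating that the parsed tool call is one you actually allow, with arguments in range, before anything executes.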
The model ISN’T outputting the letters individually; byte-level models (as I mentioned) do that, not standard transformers.
The model output is more like Strawberry <S-T-R><A-W-B>
<S-T-R-A-W-B><E-R-R>
<S-T-R-A-W-B-E-R-R-Y>
Tokens can be a letter, part of a word, any single lexeme, any word, or even multiple words (“let be”)
Okay, I did a shit job demonstrating the time axis. The model doesn’t know the underlying letters of the previous tokens, and this process moves forward in time.
No, this literally is the explanation. The model understands the concept of “Strawberry”. It can output it (and that itself is very complicated) in English as Strawberry, in Persian as توت فرنگی, and so on.
But the model does not understand how many Rs exist in Strawberry or how many ت exist in توت فرنگی
Broadcom management deserve gulag
For usage like that you’d wire an LLM into a tool-use workflow with whatever accounting software you have. The LLM would make queries to the rigid, non-hallucinating accounting system.
I still don’t think it would be anywhere close to a good idea, because you’d need a lot of safeguards, one hallucination fucks up your accounting, and you’ll have some unpleasant meetings with the local equivalent of the IRS.
This is because autoregressive LLMs work on high-level “tokens”. There are LLM experiments that can access byte information and correctly answer such questions.
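The token-vs-byte gap can be shown with a toy example. The “tokenizer” below is fake: it just chunks on a tiny hand-written vocabulary to mimic how a real BPE tokenizer might merge pieces. A real model sees opaque token IDs, not the letters inside them.

```python
# Toy illustration: a fake "tokenizer" that chunks words the way a
# BPE vocabulary might, so the "model" never sees individual letters.
def toy_tokenize(word):
    vocab = ["straw", "berry"]  # hypothetical merges; real tokenizers learn these
    tokens, rest = [], word.lower()
    while rest:
        for piece in vocab:
            if rest.startswith(piece):
                tokens.append(piece)
                rest = rest[len(piece):]
                break
        else:
            # fall back to a single character when no vocab piece matches
            tokens.append(rest[0])
            rest = rest[1:]
    return tokens

word = "strawberry"
print(toy_tokenize(word))   # ['straw', 'berry'] -- two opaque IDs to the model
print(word.count("r"))      # 3 -- trivial once you have character access
```

Counting the Rs is a one-liner with byte/character access, but the model only ever gets the two merged chunks.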
Also, they don’t want to support you, omegalul. Do you really think call centers are hired to give a fuck about you? This is intentional.
The infrastructure can be duplicated and studied tho. Would be cool if a little dreamy
Watch someone reverse the thing into turbocharged WINE
Industrial emulation is easy to do; a sandboxed and controlled VM won’t die from hardware faults like a hunk of shit from 1993.
Also there are NEW computers made specifically for this particular purpose, they even have ISA buses and shit
I don’t understand why Lemmy is living in la la land; the moment you go against the narrative, you’re brigaded to shit.
Yes, y’all do be in fact wrong
Bonus: IBM sells emulation packages for migration to new architectures. IBM probably knows better than the lot of us.
Instead of using old proprietary shit you could use Linux or *BSD with a vintage desktop environment and have a blast
Something I noticed is that basic users (someone using a fucking 30 y/o OS is definitely one) have an easier time with *nix because most “technical” people are overfitted and brainwashed to the Micro$uck ecosystem
Meta’s recent Llama models are a disaster, and worse, they only masquerade as open models. Meanwhile Europe has its own AI labs like Mistral, which makes really good models under the Apache 2.0 license.
We got baited by piece-of-shit journos.
It’s a local model. It doesn’t send data over.
If they want call data they can buy it straight from service providers anyway
This is unironically a technique both for catching LLM errors and for speeding up generation.
Setups like this are used in speculative decoding and mixture-of-experts architectures, for example.
Fuck the kids, their piece of shit parents can pull their fucking weight if they have a problem
?!!? Before genAI it was hired human manipulators. Your argument doesn’t hold. We can’t call Edison a witch and go back to caves; new tech creates new threat landscapes.
Humanity adapts to survive and survives to adapt. We’ll figure some shit out
My man, this is literally what they just did. This isn’t a strawman. At least google the meaning of your catchphrase, ffs.