My cousin’s friend knows a guy if you are interested…
Errrm,
Ma’am*
Sorry you just have a very raspy voice.
You, sir, are a genius
Tomato, tomato translates hilariously poorly in text, I’m dying
It’s easier to read after a few pints of vodka
orders delivery
falls asleep
complains about “Bullshit fucking app”
(╯°□°)╯︵ ┻━┻
What about those of us who are dumb but not curious?
I’m not really following you, but I think we might be on similar paths. I’m just shooting in absolute darkness, so don’t put much weight on my guess.
What makes transformers brilliant is the attention mechanism. That, in turn, is brilliant because it’s dynamic: the weights depend on your query (among other things). This lets the transformer distinguish between bat and bat, the animal and the stick.
You know what I bet they didn’t do in testing or training? A nonsensical query containing a single word repeated thousands of times.
So my guess is simply that this query took the model so far out of its training space that the model weights have no ability to control the output in a reasonable way.
As for why it would output training data and not random nonsense? That’s a weak point in my understanding and I can only say “luck,” which is, of course, a way of saying I have no clue.
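To make the “dynamic, depending on your query” point concrete, here’s a minimal sketch of scaled dot-product attention in NumPy. The embeddings are made-up toy vectors (not from any real model), but they show the mechanism: the same query token “bat” produces a different output vector depending on which context word it attends to.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention.

    The weights are recomputed from the query every time, so an
    ambiguous token's representation shifts with its context.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)       # query/key similarity
    weights = softmax(scores, axis=-1)  # rows sum to 1
    return weights @ V, weights

# Hypothetical 4-dim embeddings for a toy vocabulary.
emb = {
    "bat":   np.array([1.0, 1.0, 0.0, 0.0]),
    "flew":  np.array([0.0, 1.0, 1.0, 0.0]),  # animal-ish context
    "swung": np.array([1.0, 0.0, 0.0, 1.0]),  # sports-ish context
}

q = emb["bat"][None, :]  # "bat" as the query
outs = {}
for ctx in ("flew", "swung"):
    K = np.stack([emb["bat"], emb[ctx]])
    out, w = attention(q, K, K)  # tie values to keys for simplicity
    outs[ctx] = out
    print(ctx, w.round(2))
```

The attention output blends “bat” with whatever it attends to, so `outs["flew"]` and `outs["swung"]` end up as different vectors for the same word. A real transformer stacks many such layers with learned projections, but the context-dependence comes from this step.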
I laughed way too hard at this, many thanks.
Not many. I prefer smaller trackers though. If you see a lot of popular torrents on larger trackers, you’ll have a bunch of concurrent active seeds.
If you permaseed, you don’t need to know individual tracker seeding requirements.
I would absolutely believe it, makes a lot of sense.
I’m a big fan of Anthropic, but I will admit that in terms of quality, they lag behind GPT-4.
I highly recommend giving them a go.
This always makes me laugh, thank you
You definitely have the right username for the job
Rose anvil?
I meant “what’s confusing about some people disliking federation?”
And I agree with you, unnecessary complexity is one of those reasons.
What’s confusing about it?
🤣