• 0 Posts
  • 34 Comments
Joined 1 year ago
Cake day: June 26th, 2023






  • The word “have” is used in two different ways. One is to own or hold something: if I’m holding a pencil, I have it. The other is to signal grammatical tense, as in “I shouldn’t have done it” or “they have tried it before.” The contraction “'ve” is only used for tense, never for ownership. So the phrase “they’ve it” is grammatically incorrect.




  • Let’s play a little game, then. We each give the other a description of a project we made, and we try to build that project using only what we can get out of ChatGPT. We send each other the chat log after a week or so. I’ll start: the hierarchical multiscale LSTM is a stacked LSTM in which each layer emits a boundary state; when that state is true, the layer above it updates. The final layer is another LSTM that takes the hidden state from every layer and returns a final hidden state as an embedding of the whole input sequence.

    I can’t do this myself, because that would break OpenAI’s terms of service, but if you make a model that won’t develop into anything, that’s fine. Now, what does your framework do?

    Here’s the paper I referenced while implementing it: https://arxiv.org/abs/1807.03595
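    To make the description concrete, here is a rough NumPy sketch of that update rule. This is my own toy version, not the paper’s implementation: a real HM-LSTM learns the boundary detector end-to-end (the paper uses a straight-through estimator), while this one just thresholds a linear readout, and all names and sizes are illustrative.

```python
# Toy sketch of a hierarchical multiscale LSTM forward pass.
# Each layer updates only when the layer below signals a boundary;
# a final "summary" LSTM consumes every layer's hidden state.
import numpy as np

rng = np.random.default_rng(0)

def lstm_step(x, h, c, W):
    """One vanilla LSTM step; W maps [x; h] to the four gate pre-activations."""
    z = W @ np.concatenate([x, h])
    i, f, o, g = np.split(z, 4)
    sigm = lambda a: 1.0 / (1.0 + np.exp(-a))
    c = sigm(f) * c + sigm(i) * np.tanh(g)
    h = sigm(o) * np.tanh(c)
    return h, c

def boundary(h, w_b):
    # Hard 0/1 boundary detector on the layer's hidden state (thresholded,
    # not learned with a straight-through estimator as in the paper).
    return float(w_b @ h > 0.0)

def hm_lstm(xs, n_layers=2, d=8):
    Ws = [rng.normal(0, 0.1, (4 * d, 2 * d)) for _ in range(n_layers)]
    w_bs = [rng.normal(0, 0.1, d) for _ in range(n_layers)]
    hs = [np.zeros(d) for _ in range(n_layers)]
    cs = [np.zeros(d) for _ in range(n_layers)]
    # Final LSTM takes the concatenated hidden states of all layers.
    W_top = rng.normal(0, 0.1, (4 * d, n_layers * d + d))
    h_top, c_top = np.zeros(d), np.zeros(d)
    for x in xs:
        inp, b = x, 1.0  # the bottom layer updates on every timestep
        for l in range(n_layers):
            if b:  # layer l updates only when the layer below flagged a boundary
                hs[l], cs[l] = lstm_step(inp, hs[l], cs[l], Ws[l])
                b = boundary(hs[l], w_bs[l])
            else:
                b = 0.0
            inp = hs[l]
        h_top, c_top = lstm_step(np.concatenate(hs), h_top, c_top, W_top)
    return h_top  # embedding of the whole input sequence

emb = hm_lstm([rng.normal(size=8) for _ in range(5)])
print(emb.shape)  # (8,)
```

    The point of the gating is visible in the inner loop: upper layers skip timesteps entirely unless the layer below detects a segment boundary, which is exactly the behavior ChatGPT kept replacing with a plain stacked LSTM.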


  • Sorry that my personal experience with ChatGPT is ‘wrong.’ If you feel the need to insult everyone who disagrees with you, that says more about your ability to communicate than mine. Furthermore, I think we’re talking about different levels of novelty. You haven’t told me the exact nature of the framework you developed, but the things I’ve tried to use ChatGPT for never turn out well. I do a lot of ML research, and ChatGPT simply doesn’t have the flexibility to help. When I was implementing a hierarchical multiscale LSTM, no matter what I tried, ChatGPT kept getting mixed up and implementing more popular models. Because of the way it learns, ChatGPT can only reliably interpolate between the excerpts of text it’s been trained on. So I don’t doubt it was useful for designing your framework, since that framework is likely similar to existing ones, but for my needs it simply does not work.










  • It’s less about the fallibility of humans and more mathematical than that. A person’s ability to acquire wealth is proportional to the wealth they currently have (and I’m not just talking about money, I’m talking about resources and power). As a result, those with a tendency to act nastier have an advantage in gaining wealth. The same issue is present in a communist economy: while communism eschews the concept of money, it does not reject the idea of unequal power. Even a superintelligent AI couldn’t fix this, as long as it was forced to give humanity basic freedoms and follow communist ideals.

    Honestly, this whole communism vs capitalism debate is beneficial to the powers that be, since neither system actually tries to prevent the acquisition of power or the abuse of it.
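    That “wealth grows in proportion to wealth” dynamic is easy to see in a toy simulation (my own illustration; the numbers are arbitrary): start everyone equal, apply random proportional gains each period, and concentration emerges with no nastiness modeled at all.

```python
# Multiplicative-growth toy model: each period, wealth changes by a random
# percentage of itself. Small lucky streaks compound, so equality is unstable.
import random

random.seed(1)
people = [1.0] * 100  # everyone starts with identical wealth
for _ in range(200):
    people = [w * (1.0 + random.uniform(-0.10, 0.12)) for w in people]

people.sort(reverse=True)
top10 = sum(people[:10]) / sum(people)
print(f"share held by top 10%: {top10:.0%}")
```

    Under perfect equality the top 10% would hold exactly 10%; after a few hundred periods of proportional returns they hold far more, purely from compounding.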