The UI looks mostly like Firefox IMO. Critically, it’s just as slow as Firefox.
Also he steals her child
That’s my secret Cap… I’m always orgasming
Back when Reddit was good the ads used to be like regular posts with a comment section, so you could actually talk about the product and exchange experiences, and the advertiser would sometimes respond. I found it to be a transparent and valuable way of advertising, and I actually liked the ads back then because there was a social and learning aspect to them. But of course they got rid of that, supposedly because what if somebody says something bad. They don’t understand that the lack of honesty and dialogue is what makes people loathe ads.
Also never connect it to the Internet
Why did you hear it at all?
It would depend on the jurisdiction obviously, but I believe most of those points are irrelevant.
As far as signing goes, I know that in my country (Sweden) a verbal agreement is legally just as good as a written signature - it’s just harder to prove in court. Contract law typically recognizes the ability to agree electronically, and in EULAs the agreement is made by using the software. Again, YMMV by country. My original claim that they’re typically illegal was about the actual terms of the agreement, which often conflict with written law. For example in the EU you have a right to reverse engineer products for the sake of interoperability, and no EULA can override that right.
In Sweden there’s also a law to allow you to make personal backups of media and software, and you’re permitted to give copies to your friends and family. In fact, there’s a state-regulated “private copying levy” designed to compensate content owners for their monetary loss caused by this copying. Which really infuriates me considering the lengths they go to to prevent me from doing the copying that I’m paying them for the right to do.
…today?
As far as I can tell that’s not at all the case in Sweden, where I live; in fact, geriatric or slow drivers are very rarely involved in accidents. Intoxicated drivers are extremely rare compared to most other countries. See e.g. https://www.itf-oecd.org/sites/default/files/sweden-road-safety.pdf which says “Inappropriate speed is one of the leading causes of road crashes”. You can find more research saying similar things on Google, e.g. that for every 10 km/h increase in speed, the risk of an accident increases by 33 percent.
But it’s not just a matter of having a high overall speed. It’s also how quickly you accelerate / brake. BMW/Audi/Tesla cars have a high capacity for acceleration, and their drivers use it, e.g. to overtake in situations where others wouldn’t. I suspect the cause/effect is the other way around though: if you’re a reckless driver who doesn’t care about safety, you’re more likely to choose a car that has a lot of power.
I’m sure that’s one contributing factor, but I’d bet that the biggest issue is that the car is made to go fast. People who drive faster end up in more accidents. Hence why Audi / BMW drivers are also stereotypically bad drivers - they are both brands with a high-acceleration profile.
Indeed, and it seems attainable now, if it weren’t for the expensive hardware and massive energy required for general pre-trained transformers. I don’t want my car to call home just to run a neural network on Azure; it needs to run locally.
Especially when the buttons move around in the GUI after an update so you accidentally press the wrong ones, or end up having to search the menus while driving.
Perhaps this could change when we have mainstream tactile displays, but until then buttons will always be better.
An upvote would have sufficed to communicate as much
Where I live, they’re almost always in contradiction with the laws.
Semmelweis’s hypothesis is testable. None of what you mentioned is.
you’re mom
I’m a long-time software developer who at one point spent a lot of time on a software synth as a hobby project (never finished it as I realized it had fundamental design flaws). I’m also interested in making music (but still suck at it), follow various producers on YouTube and dabble with Ableton. Here are some things that puzzle me:
Latency seems inevitable, regardless of how fast your CPU or code is. Many algorithms simply require a certain window of input data before they can produce something. For example, an FFT with a window size of 2048 requires 2048 samples (about 46 ms at a 44.1 kHz sample rate) before it can react. Chain multiple such filters together and it adds up. In my hobby project I wanted to make a “reverse reverb” module (buffer data, reverse it, apply reverb, then reverse the audio again to get an effect as if the sound is “arriving”) and I could never wrap my head around how to do it. It could potentially add a latency of tens of seconds. How can we deal with this in the audio pipeline? It seems like for prerecorded or generated audio, it should be possible to consume data ahead of time so the output comes out at the right time. But all of the modules need to be synchronized so that e.g. a drum comes out at the right time along all paths.
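To make it concrete, here’s a rough offline sketch of the effect I have in mind (assuming numpy, with a decaying noise burst standing in for a real impulse response). The part I can’t figure out is how to do this in a streaming pipeline without first buffering the whole clip:

```python
import numpy as np

def reverse_reverb(dry: np.ndarray, impulse_response: np.ndarray) -> np.ndarray:
    """Offline 'reverse reverb': reverse the buffer, convolve with a reverb
    impulse response, then reverse back so the tail 'arrives' before the hit.
    It needs the whole buffer up front, which is exactly where the latency
    problem comes from in a real-time chain."""
    wet = np.convolve(dry[::-1], impulse_response)
    return wet[::-1]

# Toy usage: half a second of noise as the dry signal, and a decaying noise
# burst as a stand-in impulse response (a real IR would come from a reverb).
sr = 44_100
dry = np.random.randn(sr // 2).astype(np.float32)
decay = np.exp(-np.linspace(0.0, 8.0, sr // 4))
ir = (np.random.randn(sr // 4) * decay).astype(np.float32)
wet = reverse_reverb(dry, ir)   # len(dry) + len(ir) - 1 samples long
```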
Typically analog synths have lower latency, but I don’t understand why. Aren’t they theoretically subject to the same limitations as a digital synth? Even an analog filter would need some kind of buffer to determine frequency. It’s like Heisenberg’s uncertainty principle but for sound. So how does that work, and how can we replicate the low latency of analog synths in software synths?
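For comparison, the digital cousin of an analog RC lowpass reacts one sample at a time with no window at all, which is part of what confuses me about where the buffering requirement actually comes from. A toy sketch (the coefficient formula is the usual one-pole approximation):

```python
import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate=44_100):
    """One-pole lowpass (the digital counterpart of an analog RC filter).
    It processes each sample as it arrives -- no block/window latency,
    just a small frequency-dependent phase/group delay."""
    a = math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)  # feedback coefficient
    y = 0.0
    for x in samples:
        y = (1.0 - a) * x + a * y  # mix a bit of the new input into the running state
        yield y

# Toy usage: filter a unit impulse and look at the smeared-out response.
response = list(one_pole_lowpass([1.0] + [0.0] * 15, cutoff_hz=1_000))
```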
I lack an intuition about sound synthesis and it all seems very magical, so I wish somebody would help me untangle the relationship between what I hear and what the algorithm does. I mean it’s easy to look up algorithms for producing audio, but I don’t know how to apply those algorithms to incrementally work my way toward the sound I’m looking for in my head. As a developer I have an analytical mindset, and most producers I follow seem to go more on feeling (which is difficult for me). I have a hunch that a lot of what they talk about is just placebo, but I don’t know how I would test that assertion. For example, there are people who compare the different sounds of Ableton’s Operator and Serum, as if they are different beasts. But both are FM synths; it’s the same maths behind them. So why would they sound different? With all the FM synths that are out there, what are the things that actually separate them and produce a different “feel”?
In fact, speaking of FM synths, they are one of the biggest mysteries to me. I know what they do mathematically, but I need help understanding why someone chose to build a synth in this particular way and how they tame it to get the sound they want. To me it just seems like a really chaotic way to work, only slightly better than a random number generator.
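To make the question concrete, this is the kind of two-operator FM I mean (a toy sketch, assuming numpy; the ratio, index and the decay on the index are made-up values). My impression is that the envelopes and scalings on these knobs, rather than the core math, are where different synths would diverge, but I’d like to understand how anyone dials them in on purpose:

```python
import numpy as np

def fm_tone(freq=220.0, ratio=2.0, index=3.0, seconds=1.0, sr=44_100):
    """Two-operator FM: a modulator sine wiggles the carrier's phase.
    `ratio` sets where the sidebands land relative to the carrier,
    `index` how much energy goes into them, and the envelope on `index`
    is what makes the timbre move over time."""
    t = np.arange(int(seconds * sr)) / sr
    index_env = index * np.exp(-3.0 * t)            # made-up decay on the modulation depth
    modulator = np.sin(2.0 * np.pi * freq * ratio * t)
    return np.sin(2.0 * np.pi * freq * t + index_env * modulator)

tone = fm_tone()   # tweak ratio/index and the character changes completely
```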
Perhaps it would be interesting as a case study to try to replicate some of these commercial software synths by stitching together basic algorithms covered in the manual.
To do that you first need to choose a calendar and a time zone, then convert to that representation. It can be done, but you need a good implementation that understands the entire history of what has transpired w.r.t. date conventions in that location and culture. For timestamps in the future it is impossible to do correctly, since you can’t know how date conventions will change in the future.
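A minimal sketch of what that conversion looks like in practice (assuming Python’s zoneinfo on top of the IANA tz database, and a made-up timestamp):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+, backed by the IANA tz database

ts = 1_700_000_000                        # a point on the timeline, no calendar implied
utc = datetime.fromtimestamp(ts, tz=timezone.utc)
local = utc.astimezone(ZoneInfo("Europe/Stockholm"))
print(local.isoformat())                  # 2023-11-14T23:13:20+01:00

# The tz database encodes the *history* of offset/DST rules for that location.
# For future timestamps it can only encode today's rules, which may change.
```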
However, I should add that as far as mathematical operations go, calculating the number of months between t1 and t2 is an entirely different thing than the duration of time that passed between those timestamps. Even if it is expressed similarly in the English language, semantically it’s something else. It’s like asking “how many kilometers did your car go” vs “how many houses did the car pass on the way”.
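A small example of the difference, with made-up dates:

```python
from datetime import datetime, timezone

t1 = datetime(2023, 1, 31, tzinfo=timezone.utc)
t2 = datetime(2023, 3, 1, tzinfo=timezone.utc)

elapsed = t2 - t1                                   # physical duration: 29 days
calendar_months = (t2.year - t1.year) * 12 + (t2.month - t1.month)
# calendar_months == 2 by this (month-boundary) convention, even though
# barely one month of actual time has passed -- a different question entirely.
```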
“Leads to a bad search experience for users” is Google speak for “they are seeing ads served by other companies than Google”