It works with plugins just like Obsidian, so if their implementation is not good enough, you can always find a Grammarly plugin.
It does not work exactly like Obsidian, as it is an outliner. I use both on the same vault, and Logseq is slower on larger vaults.
It works pretty well. You can create a good dataset for a fraction of the effort and cost it would have taken to do by hand, and the quality is similar. You just have to review each prompt so you don’t train your model on bad data.
Do you use ComfyUI?
You are easier to track with AdNauseam.
Being able to run benchmarks doesn’t make it a great experience to use, unfortunately. Three quarters of applications don’t run, or have bugs that the devs don’t want to fix.
Windows does not run well on ARM, which can be a turnoff for some.
Llama models tuned for conversation are pretty good at it. ChatGPT also was, before getting nerfed a million times.
Even dumber than that: when their activation method fails, support uses Massgrave to install Windows on customer PCs.
Some IPs are shadowbanned; if you are using a VPN/proxy, that might be the reason.
JPEG XL support is being tested in Firefox Nightly.
https://tiz-cycling-live.io/livestream.php
Be sure to use an adblocker. Sometimes the stream gets taken down and you have to wait a minute or two for them to repost one.
I think that for most people, Linux is the simplest OS to use. I switched my parents’ and sister’s computers to Linux Mint, and they no longer ask me every two weeks to help them with Windows changing their browser or moving their icons. That said, if you are trying to do anything more than web browsing, document editing, and listening to music, you will have to learn how some of the OS works.
Yes, but it will take some learning time
They have a GitHub repository where you can see all the changes being made: https://github.com/privacyguides/privacyguides.org/releases
Llama 2 now uses a license that allows for commercial use.
2/3 of the people living in Saudi Arabia are immigrants whose passports have been confiscated; they work in factories, on construction sites, in oil fields, and in all other kinds of manual jobs. Meanwhile, Saudi citizens occupy all the well-paid jobs that require education, and immigrants can’t apply for those. If they didn’t use forced labor, there simply wouldn’t be enough people in the country to fill all the jobs, and their economy could not be as good as it is right now.
Muslim and Christian minorities are forced to work in camps to “re-educate” them into being good Chinese citizens.
The best way to run a Llama model locally is with text-generation-webui; the model will most likely be quantized to 4- or 5-bit GGML/GPTQ these days, which makes it possible to run on a “normal” computer.
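As a rough back-of-the-envelope illustration of why quantization makes local use feasible (the parameter count and bit widths below are illustrative assumptions, not figures from the comment): a 7B-parameter model at 16-bit weights needs roughly 14 GB just for the weights, while a ~4.5-bit quantized version fits in about 4 GB.

```python
def model_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight memory in gigabytes (using 1 GB = 1e9 bytes)."""
    return n_params * bits_per_weight / 8 / 1e9

# Illustrative numbers for a 7B-parameter model:
fp16 = model_size_gb(7e9, 16)    # 16-bit floats -> ~14 GB
q4 = model_size_gb(7e9, 4.5)     # ~4.5 bits/weight (4-bit quants carry some overhead) -> ~3.9 GB
print(round(fp16, 1), round(q4, 1))
```

This ignores activation memory and runtime overhead, but it shows why a quantized model can fit in the RAM of an ordinary machine while the full-precision one cannot.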
Phind might make it accessible on their website soon, but it doesn’t seem to be the case yet.
EDIT: Quantized versions are available thanks to TheBloke.
It is already here: half of the article thumbnails are already AI generated.