ebay is very international, and is also by far the greatest site for second-hand stuff in most European countries. I normally buy my used drives there.
Safety Engineer, Dad, Husband, Pilot, Musician. Not necessarily in that order.
mixing drive models is certainly not going to do any harm
It may, performance-wise, but usually not enough to matter for a small self-hosting server.
Sure, SCSI disks will show their defect lists (“primary defects”, as delivered by the factory, and grown defects, accumulated during use), and they all have a couple hundred primary defects. But I don’t see why that would affect the reported geometry, given that it is fictional anyway. And all disks have enough spare tracks to accommodate the defects and still offer the specified total number of sectors, even with a long list of grown defects. Incidentally, all the 4TB disks are still “perfect” in that they have no grown defects.
And yes, ever since LBA, nobody has used sectors and cylinders for anything.
I’m not touching that post again. But a small rant about typesetting in lemmy: It seems there is no way whatsoever to put angle brackets in a “code” section. In an overzealous attempt to prevent HTML injection, everything in angle brackets is just removed when posting (although it remains there in preview). In normal text, you can use “&lt;”, but not inside “code” segments, where it will be retained verbatim.
If you’re as paranoid as me about data integrity, SAS drives on a host adapter card in “Initiator Target” (IT) mode with write-cache on the disks disabled is the safest. It will degrade performance when writing many small files concurrently, but not as badly as with SATA drives (that’s for spinning disks, of course, not SSD). With a good error-correcting redundant system such as ZFS you can probably get away with enabled write cache in most cases. Until you can’t.
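For reference, on Linux the on-disk write cache of a SAS/SCSI drive is controlled by the WCE bit in the caching mode page, which sdparm can inspect and clear. A sketch, assuming the drive shows up as /dev/sdb (the device name is an example):

```shell
# Show the current state of the Write Cache Enable (WCE) bit
sdparm --get=WCE /dev/sdb

# Clear WCE, i.e. disable the drive's volatile write cache
sdparm --clear=WCE /dev/sdb

# Add --save to persist the setting across power cycles
sdparm --clear=WCE --save /dev/sdb
```

Note this needs root and only applies to SCSI/SAS devices; for ATA drives hdparm’s -W flag is the rough equivalent.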
RAID is generally a good thing, but don’t get complacent; follow the 3-2-1 method
To expand on that: Redundant drive setup and backups serve completely different purposes. The only overlap is in case of a single disk failure, where RAID (or similar) may save the data.
Redundancy is all about reducing downtime in case of single hardware failures. Backups not only protect you from data loss in case of multiple simultaneous failures, but also from accidental deletion. Failures that require restoration of data almost always involve downtime. In short: You always need backups (unless it’s strictly a local cache, and easily recreatable), but if you want high availability, redundancy may help.
3-2-1-rule for backups, in case you’re unfamiliar: 3 copies of important data, on 2 different media, with 1 off-site.
That’s a very narrow-minded view. I thought the same thing when the iPad was new. But I changed my mind.
Sitting on the sofa watching movies or reading the news is a good application; a laptop is too clunky for that, and a phone screen is too small.
They are also used as air-navigation devices in (not only) light aircraft, and as a replacement for paper charts in airline operations. There are many legitimate uses where tablets are exactly what you want. If it’s not for you, fine.
Thanks.
Thanks. Right now I’m away from the machine so can’t look, but I’ll keep an eye open for a T420 mainboard and a second CPU, then. It’ll still be a decent machine, I think, with two E5-2470 V2. DDR3 ECC-RAM is also dirt cheap these days.
Clearly you neither read my post nor looked into what the air baffle in the T320 actually looks like. So what’s your point?
It’s much more than a fan shroud. It’s a baffle specifically designed to guide cooling air over the CPU heatsinks and the RAM modules. This kind of airflow design is very common in servers. I wouldn’t run it without one, especially since the CPU heatsinks have no dedicated fans, but rely on the aerodynamic functioning of the baffle.
And yes, I know they are very similar, in fact I am quite (but not absolutely) certain that they are identical except for the actual second CPU socket. It’s almost as if you didn’t read my post. Even the soldering points for the second CPU socket are there in the single-CPU T320. They certainly won’t have different PSU connectors. They even share part numbers for the case.
I’d have to check the baffle shape again. But thanks for the insight.
I don’t think there’s anything intrinsically wrong, but as far as I can see you are using only a single disk for the zfs pool, which will give you integrity checks (know when something is corrupted), but no way to fix it.
Since this is, by today’s standards, a tiny disk at 100G, I assume this is just a test setup? I’m not sure zfs is particularly well suited for virtual machines, I think it is better to have the host handle the physical data integrity by having the disk image on a zfs filesystem, or giving the VM a zfs volume (block device) directly.
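To illustrate the second option, a zvol can be created on the host and handed to the VM as a raw block device; the host’s pool then provides the redundancy and checksumming. A sketch, with a made-up pool name “tank” and a qemu guest as an example hypervisor:

```shell
# Create a 100G zfs volume (block device) inside the pool "tank"
zfs create -V 100G tank/vm-disk

# The volume appears as a device node the hypervisor can use directly,
# e.g. with qemu:
qemu-system-x86_64 -drive file=/dev/zvol/tank/vm-disk,format=raw,if=virtio
```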
One urgent thing is that the EU follow the UK in abandoning the ill-conceived “client-side scanning”, aka Chat-Control.
Although I’m a bit late, it is worth mentioning, that the Tu-22M3 is not just a variant of the Tu-22. The Tu-22 was a completely different aircraft, and the Tu-22M retained the name only for political reasons. The Tu-22M3, though, is actually a development of the Tu-22M, most notably with different air intakes.
I use the names of chemical elements, but with two twists: I assign them in the order in which they appear in the song “The Elements” by Tom Lehrer, and I use the German names. So I have (or had), among others, Wasserstoff, Sauerstoff, Stickstoff, etc …
Same here. It just says “nginx has been successfully installed” or something like that. It serves the appropriate directories or redirects to the respective virtual machines for other (sub) domains.
What are the advantages of raid10 over zfs raidz2? It requires more raw disk space per unit of usable space as soon as you have more than 4 disks, it lacks zfs’s automatic checksum-based error correction, and it is generally less resilient against multiple disk failures. In the worst case, losing two disks can mean losing the whole array, whereas raidz2 can tolerate the loss of any 2 disks. Plus, with raid you still need a separate volume manager and filesystem.
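The capacity difference is easy to see with a back-of-the-envelope calculation (equal-size disks, ignoring metadata and slop overhead, so a simplification for comparison only):

```python
def usable_raid10(n: int) -> int:
    # raid10: disks are paired into mirrors, so half the raw space is usable
    return n // 2

def usable_raidz2(n: int) -> int:
    # raidz2: two disks' worth of parity, the rest is usable
    return max(n - 2, 0)

for n in (4, 6, 8):
    print(n, usable_raid10(n), usable_raidz2(n))
# 4 disks: 2 vs 2 -- a wash
# 6 disks: 3 vs 4 -- raidz2 ahead
# 8 disks: 4 vs 6 -- raidz2 well ahead
```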
ZFS raidz1 or raidz2 on NetBSD for mass storage on rotating disks, journaled FFS on RAID1 on SSD for system disks, as NetBSD cannot really boot from zfs (yet).
ZFS because it has superior safeguards against corruption, and flexible partitioning; FFS because it is what works.
To add, unlike “traditional” RAID, ZFS is also a volume manager and can have an arbitrary number of dynamic “partitions” sharing the same storage pool (literally called a “pool” in zfs). It also uses checksumming to determine if data has been corrupted. On redundant setups it will then quietly repair the corrupted parts with the redundant information while reading.
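As a minimal sketch of what that looks like in practice (pool name, dataset names, and device paths are all made up for illustration):

```shell
# Create a redundant pool from two mirrored disks
zpool create tank mirror /dev/sda /dev/sdb

# Datasets ("partitions") share the pool's space dynamically
zfs create tank/media
zfs create tank/backups
zfs set quota=500G tank/media   # optional per-dataset limit

# Read every block, verify checksums, and repair silent corruption
# from the mirror copy
zpool scrub tank
```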