Posts

August 2024 update

I finally filled up the primary storage server with 1450 plots. My next task is to replace the couple of Pi4's that have a pair of external hard disks each with Qnap TR-004 (4 bay) storage units. To that end I have ordered a couple of Pi5's (8GB). I will swap one at a time so that I keep as many plots online as possible. Each of the Pi4's has around 275 plots, so I should be able to double that. My concern is that the TR-004 connected to a Pi5 will be slow, as the Pi only has USB 3.0 (5 Gbit) ports even though the TR-004 supports USB 3.2 gen 2 (10 Gbit) connections. I guess I will find out. If it works out, one could scale the solution to multiple Pi5/TR-004 units running as harvesters, each requiring only two power points and one network connection.
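
For a rough feel of the bandwidth numbers, here is a quick back-of-envelope sketch (the encoding efficiencies are my assumptions about USB protocol overheads, not measured figures, and it assumes all four drives are being read at once):

    # Back-of-envelope: sequential bandwidth per drive in a 4-bay TR-004
    # when the enclosure shares a single USB link between all drives.

    def per_drive_mb_s(link_gbit: float, efficiency: float, drives: int) -> float:
        """Usable MB/s per drive on a shared USB link."""
        usable_mb_s = link_gbit * 1000 / 8 * efficiency
        return usable_mb_s / drives

    # USB 3.0: 5 Gbit with 8b/10b encoding (~80% efficient)
    print(f"USB 3.0:       {per_drive_mb_s(5, 0.80, 4):.0f} MB/s per drive")
    # USB 3.2 gen 2: 10 Gbit with 128b/132b encoding (~97% efficient)
    print(f"USB 3.2 gen 2: {per_drive_mb_s(10, 0.97, 4):.0f} MB/s per drive")

That works out to roughly 125 MB/s per drive on the Pi5's USB 3.0 ports versus about 300 MB/s on a full 10 Gbit link. In practice harvester lookups are small random reads, so the shared link should matter mostly while copying plots onto the drives.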

May 2024 update

As mentioned in my previous post, I replaced all the drives in my primary storage server with larger ones. I am still trying to fill it up. Unfortunately the Bladebit plotter seems to lock up regularly, and when I notice, I have to shut down Chia, delete the temporary files off the SSD, and remove the half-done plot it was working on when it locked up. Top shows a zombie task, and the plotter seems to have lost track of it. The 2nd WD 16TB drive that I returned under warranty has been rejected. They claimed it had a dent on the back of the drive, so likely shipping damage when I got it. I had 12 of these drives, now 11, so there is no point wasting any more money on the WD drives. I currently have two Qnap TR-004 units that hold 4 drives each, so 8 drives are enough. I need to fill the primary storage server before working on the TR-004's.
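
The manual cleanup could be scripted. A minimal sketch, assuming the hypothetical temp directory and *.tmp naming below (adjust both for your own plotter settings):

    # Cleanup after a locked-up plotter run. Run only after stopping Chia,
    # so nothing is still writing to the temp directory.
    from pathlib import Path

    TMP_DIR = Path("/mnt/ssd/chia-tmp")  # hypothetical plotter temp directory

    def clean_plotter_leftovers(tmp_dir: Path) -> None:
        """Delete leftover temporary files from an interrupted plot."""
        for f in tmp_dir.glob("*.tmp"):
            print(f"removing {f} ({f.stat().st_size / 2**30:.1f} GiB)")
            f.unlink()

    clean_plotter_leftovers(TMP_DIR)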

April 2024 update

I bought 10 x 22TB Seagate HDD. Last week I bit the bullet, removed all the 16TB HDD from the primary storage server, and put the 22TB drives in. I am now plotting away trying to fill it up. The Bladebit CPU plotter seems to lock up my full node, so I have to check on it once a day. The used 16TB drives are destined to go into a couple of QNAP TR-004 units running in JBOD mode, plugged into a Raspberry Pi5 running as a harvester. The TR-004 can hold 4 drives. If this setup works out I will look at replacing the two Pi4 + external HDD setups with Pi5 + TR-004 units. I might end up with 3 of these, as it seems a fairly inexpensive way of adding another harvester with 64TB of disk space. Speaking of the 16TB drives, I returned a 2nd WD 16TB HDD under warranty and picked up the replacement for the previous drive I had returned. It had been sitting at the computer shop for almost 3 months before I drove there to pick it up. That's 2 out of 14 WD drives that have failed so far. The lack of reliability has put me off buying any more WD drives.

End of February 2024 update

The TR-004 units arrived. The plan is to put four of the 16TB WD drives into each of the TR-004's. I am experimenting with running the TR-004's off a Pi5, but that may turn out to be too slow. One problem: I have 10 x WD 16TB drives in my primary storage server and I had 4 spares. Two of those spares are now in one of the TR-004's, one is off getting replaced under warranty, and another seems to have died as well. I don't want to pull the drives from the primary storage server until I can generate more plots. It's currently too hot for that, and you need to use an Nvidia GPU if you want compressed plots. I got 10 x Seagate 20TB drives to put into the primary storage server, which will increase its capacity and free up those WD 16TB drives, but the weather is holding me back.

End of 2023 update

Farming continues. All disks are full. Currently I have 200TiB of disk space occupied. I have ordered a pair of QNAP TR-004 "NAS expanders". These are 4 bay expansion boxes that connect via USB. They can do RAID or be a JBOD, controlled via dip switches on the back. My idea is to try running them off the Raspberry Pi's, but I might just plug them into the secondary storage server. I may have wasted my money on these, but I will have to see how they work out. The 5 bay ORICO expansion box I have tends to overheat despite having a fan in the back, and I have stopped using it now. The drive bay doors have most of their holes covered, and the PCB at the back where the drives plug into the SATA connectors is solid, so it blocks all the air flow that the fan produces. It also has an annoying habit of putting the drives to sleep after 12 minutes. Lastly, the drives were rather slow, but that is probably because the USB connection is 5 Gbit and has to be shared between 5 drives. QNAP have a TL-D400S (4 bay) unit that connects via SATA instead of USB, which might be worth a look.
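
The sleep problem could probably have been worked around by periodically touching each drive. A minimal sketch, assuming the hypothetical mount points below:

    # Keep-awake loop for enclosures that spin drives down after 12 minutes
    # of idle time: write a tiny file to each drive every 10 minutes.
    import time
    from pathlib import Path

    MOUNTS = [Path(f"/mnt/orico{i}") for i in range(1, 6)]  # hypothetical mounts
    INTERVAL_S = 10 * 60  # shorter than the 12-minute sleep timer

    while True:
        for mount in MOUNTS:
            # a small write forces the drive to spin up (or stay spinning)
            (mount / ".keepalive").write_text(str(time.time()))
        time.sleep(INTERVAL_S)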

Quick update Nov 2023

Chia 2.1.1 released. All disks are full of uncompressed plots. I need to get an Nvidia GPU with 8GB (or more) of vRAM and replot to get some compressed plots. The farm currently consists of:

Primary storage server (Ryzen 5600X, 128GB RAM, 10 x 16TB HDD in raidz2). Has 1060 plots.
Secondary storage server (Core i3-8100U, 64GB RAM, 7 x 8TB HDD in raidz1). Has 397 plots.
Pi storage server #1 (Pi4, 4GB RAM, 2 x 16TB HDD in btrfs single mode). Has 290 plots.
Pi storage server #2 (Pi4, 4GB RAM, 2 x 16TB HDD in btrfs single mode). Has 290 plots.

That gives 2037 plots taking 201.6TiB. I also have a plotting machine that is a full node, used mainly for plotting. It doesn't have a discrete GPU at the moment. I could add some more USB-connected HDDs to the Pi's fairly easily, but I would have to organise another power board for all the power adapters. I have some 10TB Seagate Expansion drives and a USB 3 hub available.
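
Those totals line up with the usual size of an uncompressed k32 plot (roughly 101.4 GiB each); a quick check:

    # Sanity check of the farm totals, assuming ~101.4 GiB per k32 plot.
    K32_GIB = 101.4

    plots = {
        "primary server":   1060,
        "secondary server":  397,
        "pi server #1":      290,
        "pi server #2":      290,
    }

    total_plots = sum(plots.values())
    total_tib = total_plots * K32_GIB / 1024
    print(f"{total_plots} plots ~= {total_tib:.1f} TiB")  # 2037 plots ~= 201.7 TiB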

All disks are full

Shortly after Chia 2 was released they put out a Chia 2.0.1 release, which corrects an issue where invalid compressed plots were computed due to incorrect parameters being passed to the Bladebit plotter. I have filled all the disks at this point. I might get some more Seagate Expansion drives to plug into the Pi4's, but I would rather create compressed plots if I could. Unfortunately I don't have a graphics card with 8GB of memory; the ones I have only have 6GB of video memory, and I don't want to buy a GPU just for Chia plotting. Hopefully there will be an updated plotter out soon that I can use for compressed plots; in the meantime I am simply farming. My main storage server's plots are currently accessed via an NFS share, so its latency is worse than even the Raspberry Pi4's running a harvester. The full node runs a local harvester instance and accesses the plots over the network share. I used this arrangement to get the plots across initially, but haven't yet switched the storage server to run as a harvester itself.
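
To put a number on that latency difference, one could time small random reads from a plot file over NFS versus a local disk, which is roughly what a harvester lookup does. A minimal sketch; the plot paths are placeholders:

    # Compare small-random-read latency between a local plot and one on NFS.
    import os
    import random
    import time

    def avg_read_ms(path: str, reads: int = 50, chunk: int = 4096) -> float:
        """Average time for small random reads, mimicking harvester lookups."""
        size = os.path.getsize(path)
        fd = os.open(path, os.O_RDONLY)
        start = time.perf_counter()
        for _ in range(reads):
            # note: the page cache can flatter repeat runs; indicative only
            os.pread(fd, chunk, random.randrange(0, size - chunk))
        os.close(fd)
        return (time.perf_counter() - start) / reads * 1000

    print(f"local: {avg_read_ms('/mnt/local/plot-k32-example.plot'):.1f} ms/read")
    print(f"nfs:   {avg_read_ms('/mnt/nfs/plot-k32-example.plot'):.1f} ms/read")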