PiKeeper - Keep your data fresh! A Pi-based NAS server with data and power redundancy

Background

In addition to the standard collection of music and movies, I have a large family photo collection that makes me nervous. A hard drive failure or ill-timed power outage could wipe out years of family memories. Sure, there are cloud storage solutions that can back everything up with nearly perfect redundancy, but they can be costly for large amounts of data. I needed a way to keep my data safe, for little cost. Google Photos came out in the middle of this project offering unlimited photo storage (up to a certain image size), but I was already invested by that point. :) Plus where's the fun in that!?

My two main concerns were data redundancy and power redundancy. I didn't want a simple hard drive failure or a power outage to be able to wipe me out. On paper, I came up with a full server solution: a great, powerful NAS server with tons of redundant storage space, ZFS, the works. The price, however, was way out of my budget.

After some thought, my middle ground compromise was the PiKeeper project! A Raspberry Pi NAS server using two 5TB external USB hard drives. Data is mirrored to the second, identical drive, providing protection from one of the drives failing, and a UPS protects the whole thing from power fluctuations and outages.

The PiKeeper offers an added benefit, even over a full-blown NAS... Portability! You could lose the Pi completely and all you would need to do is plug one of the drives into another computer to access your data. Try that with a RAID array. :)


Parts

The actual parts I used aren't really important, but here they are anyway:

  • 1x Raspberry Pi 2

  • 2x Samsung D3 Station 5TB USB 3.0 External Hard Drives

  • 1x APC BR1000G Back-UPS Pro 1000VA UPS

(A Pi B+ would probably scrape by as well, but I had a Pi2 handy for the project.)

Really, all that's important is that you have two hard drives of the same size, and a UPS with the ability to communicate outage info to the Pi.

Speed

Eagle-eyed viewers may have noticed that the hard drives I used are USB 3.0. The fact that the Pi and Pi 2 are limited to USB 2.0 and 100 Mbit Ethernet really isn't much of a problem in this usage scenario. Files still transfer over the internal network at about 10 megabytes per second, more than enough to stream my movies, let alone move around photos that are a few megabytes each. The only time I had to worry about moving a ton of data was when I first transferred everything to the completed server; I just let it run overnight and never worried about it again. If you absolutely need more speed, this tutorial works equally well on any Debian-based Linux platform. You could use a desktop PC, a laptop, an Odroid XU4, or just about anything else that supports USB 3.0 and gigabit Ethernet. I chose the Pi 2 because of its low power requirements (and thus less draw on the UPS), and because it was fast enough for my needs.


Hardware setup

Setting up the hardware is simple enough... Plug your UPS into the wall, and connect all the power plugs (The Pi and both hard drive power plugs) into the UPS. Since the power draw is so low I used a surge strip to keep everything neat (Everything into the strip, strip into the UPS). Connect the hard drive USB cables to the Pi, and don't forget to connect your UPS's USB cable into the Pi as well.

It's also a good idea to plug your home router and modem into the UPS. It adds very little extra drain and you then have the added bonus of having internet access in the event of a power outage! With everything plugged in and running full blast, the UPS reported that I was using about 2% of its rated capacity... I could keep the NAS and internet going for over 4 hours and not skip a beat. And if the outage lasts longer than that, the UPS will notify the Pi to gracefully shut down, preventing any data loss. We'll get to that later in the tutorial.


Getting Started

Now for the meat and potatoes!

This tutorial assumes you already have your Pi running Raspbian in a freshly installed state. There are other great tutorials if you need assistance with that. It also assumes you have a basic working knowledge of Linux, such as knowing how to navigate the file system and being familiar with editing configuration files from the command line (nano is my text-based editor of choice, with apologies to the vi and emacs people!). I link to other tutorials for steps that aren't directly related to the project, and don't always fully explain what every configuration option we're using does. Remember, if you're not completely clear on something, Google is your friend! I'll also try to update the post based on questions I get in the comments.

First things first

You'll want to set up the Pi to boot to the command line, as we don't need the desktop environment. After initial setup, everything can be done via SSH (and you'll probably have the PiKeeper tucked away in a corner anyway). I don't like removing the desktop environment however; who knows when I may need it one day, and saving a few hundred megs on a multi-gigabyte SD card isn't worth the hassle.
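
On a stock Raspbian install you can switch the boot target from raspi-config; the menu wording varies a bit between releases, but it's under Boot Options:

sudo raspi-config

Then choose Boot Options > Desktop / CLI > Console.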

Let's start by updating our local repository information and getting the newest versions of all our software:

sudo apt-get update
sudo apt-get upgrade

We can also change the hostname of our Pi to something more descriptive. I named mine 'nas'. To simplify things for this basic step, follow this great tutorial to change the Pi's hostname: How to Change Your Raspberry Pi's Hostname.

This may seem like an unnecessary step, but it's actually quite helpful. Raspbian runs avahi-daemon by default, allowing you to connect to your Pi via its hostname instead of having to find and remember its IP. Now when you're on the local network, you can simply connect to nas.local and you're connected! You could assign a static IP to your Pi instead, but this method allows the Pi to be more portable... Whatever network you're on, with whatever IP it's using, it's always accessible by connecting to nas.local! You could keep the default name, but I use multiple Pis on my network and like to be able to differentiate between them.
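
For example, once the hostname is set, from another machine on the network (macOS and most Linux desktops resolve .local names out of the box; Windows needs Bonjour installed):

ping nas.local
ssh pi@nas.local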

Setting up mail

The PiKeeper needs a way to keep you notified, especially when a drive starts to fail. There are different ways to get notifications but the most basic method is to simply have it email you (I also like to get an email every morning summarizing what was newly backed up the previous day, which we'll cover later). Here we'll set up the Pi so it can send emails via your Gmail account. Of course you can use any email provider, but the examples here will assume you're using Gmail.

Let's install the prerequisite packages for email functionality:

sudo apt-get install ssmtp mailutils mpack

Then let's edit the ssmtp config file at /etc/ssmtp/ssmtp.conf and add our mail server info:

sudo nano /etc/ssmtp/ssmtp.conf

Add or change the lines:

root=your_email@gmail.com
mailhub=smtp.gmail.com:587
rewriteDomain=gmail.com
hostname=your_email@gmail.com
AuthUser=your_email@gmail.com
AuthPass=your_app_password
useSTARTTLS=YES

You can't use your regular Gmail password in the password field; you need to create an app password from within your Google account, and add that to the AuthPass line. See here for instructions on how to do that.

Now your Pi should be able to send email! You can test it at the command line with something like this:

echo "This is a test" | mail -s "Hello world" your_email@gmail.com

You should receive an email with the subject "Hello world", and the message "This is a test".
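
We also installed mpack, which lets you email a file as an attachment in one shot. It's handy for testing as well (somefile.jpg here is just any file you have lying around):

mpack -s "Attachment test" somefile.jpg your_email@gmail.com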


Setting up the drives
NTFS support

NTFS?! Yup. Hear me out. One of the benefits of the PiKeeper is its portability. If I had to go with external drives and a Pi instead of the more expensive ZFS server setup I wanted to build, then I wanted everything to remain portable. All of this data is handled by a $35 piece of hardware, getting instructions from a $5 SD card... If the Pi or the SD card dies I can simply plug one of the drives into any PC and access everything immediately. There's nothing stopping you from using a Linux file system and LVM mirroring if that's what you want to do (see here for a tutorial), but the easiest way I could find to maintain portability was to use the NTFS file system, so that's what I chose to do for this setup.

The de facto method for accessing NTFS on Linux is ntfs-3g. Raspbian offers a precompiled package in its repo, so I could have simply installed it with apt-get and been done, but the version offered on the Raspbian repo is several years old, and since then there have been several updates that affect data integrity. To be safe, I wanted to make sure I was using the latest version available, so I compiled it from source. Fortunately it's quite simple!

Head to the ntfs-3g home page and make sure you're downloading the newest version. As of this writing, the newest version is dated March 14, 2015, so my commands will reference that version.

Let's download the source to our home directory and extract it:

cd ~
wget https://tuxera.com/opensource/ntfs-3g_ntfsprogs-2015.3.14.tgz
tar -xzvf ntfs-3g_ntfsprogs-2015.3.14.tgz

Extracting the compressed file automatically creates a folder with the same name (minus the .tgz extension), with everything extracted inside. Now we go into the folder and configure, then compile:

cd ntfs-3g_ntfsprogs-2015.3.14
./configure
make -j 4
sudo make install

If you're new to compiling software, see here for details on what a configure script does, and here for some info on make.

The -j 4 option uses all 4 cores of the Pi 2 to speed up the compile. The -j option is a source of much contention and I'm sure I'm not using it to its fullest potential, but oh well. :) Leave it out if you're on a single-core Pi B+.

Identifying and mounting the drives

Now that we can read and write to NTFS, let's set up the drives. We'll list all the drives that are currently connected:

sudo blkid

blkid will return something similar to this:

/dev/mmcblk0p1: SEC_TYPE="msdos" LABEL="boot" UUID="15CD-3B79" TYPE="vfat"
/dev/mmcblk0p2: UUID="13d368bf-6dbf-4751-8ba1-88bed06bef77" TYPE="ext4"
/dev/sda1: LABEL="SAMSUNG" UUID="887E774D7E773354" TYPE="ntfs"
/dev/sdb1: LABEL="SAMSUNG" UUID="9C84271B8426F782" TYPE="ntfs"

The first two lines are the SD card, the first being the boot partition, the second being the Linux partition. The two lines under that are the USB drives. We need to mount those drives to the file system somewhere before we can access them.

Let's create some mount points for the drives. These are the directories that the drives will attach to, and how we'll access the data. For a NAS, I like to mount these in the /media directory (we're storing a bunch of media, after all!).

sudo mkdir /media/storage
sudo mkdir /media/backup

Now we could mount the drives the traditional way, by their device mappings (/dev/sda1 and /dev/sdb1 in the above example). But a drive's mappings can change for a number of reasons... depending on which USB port they're plugged into, which order they happen to get detected in at boot time, etc. We want to be more definitive than that since we have a specific use for a specific drive, so we're going to mount them by their UUID. Every formatted partition gets its own (effectively) unique UUID, so mounting by UUID ensures that we're mounting the drive we want, where we want. We'll never accidentally mount the storage drive to the backup directory, or vice versa. The UUIDs above are for my drives; of course they'll be different for yours.

Edit the /etc/fstab file and add a line for each of your drives to the bottom of the file (remember to use the UUIDs you obtained for your drives from the blkid command):

UUID=your-uuid-here /media/storage ntfs-3g rw,defaults 0 0
UUID=your-uuid-here /media/backup ntfs-3g rw,defaults 0 0
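
You can check your new fstab entries right away, without rebooting:

sudo mount -a
df -h /media/storage /media/backup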

I also like to label the drives, so I can easily tell which is which should I ever have to plug them into a Windows machine:

sudo ntfslabel /dev/disk/by-uuid/your-uuid-here STORAGE
sudo ntfslabel /dev/disk/by-uuid/your-uuid-here BACKUP

Make sure you're labeling them the same way as you're mounting them, so that the drive mounted as 'storage' is labeled 'STORAGE', etc.

OPTIONAL: As my drives were brand new and pre-formatted for NTFS, I didn't need to format them. If you're using drives that are formatted differently (Linux ext4 or something else) or you just want to wipe them, you'll need to format the drives at this point. Note that formatting generates a new UUID, so re-run blkid afterwards and update your /etc/fstab entries to match:

mkfs.ntfs -Q -L STORAGE /dev/disk/by-uuid/your-uuid-here
mkfs.ntfs -Q -L BACKUP /dev/disk/by-uuid/your-uuid-here

After a reboot, the drives will now be accessible at /media/storage and /media/backup.

NOTE: If you're using a Pi 2, you need one more step to get the drives detected at boot time, due to a bug with the way the Pi 2 currently parallelises its boot process. Edit your /boot/cmdline.txt file and add rootdelay=10 to the end of the line in that file. This is resolved in the Raspbian Jessie release; once that's live, you won't need to do this.

Monitoring S.M.A.R.T. status

All modern hard drives come with a basic self-checking functionality called S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology). We need to be able to read and interpret this data to know if (when) a drive starts to fail.

The package that handles this is called smartmontools. Once again the version in the Raspbian repository is quite dated, but I didn't see anything earth-shattering between the old and new versions, so I chose to install the standard package:

sudo apt-get install smartmontools

We'll be using these tools to query the drive's health status and occasionally perform self-tests.
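
You can also query a drive by hand at any time, to make sure its S.M.A.R.T. data is readable over the USB bridge (the -d sat option matches what we'll give the daemon below):

sudo smartctl -H -d sat /dev/sda
sudo smartctl -a -d sat /dev/sda

The first command gives a quick PASSED/FAILED health verdict; the second dumps the full attribute and self-test listing.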

The package includes a daemon which will automatically monitor the drives in the background. To enable it, we need to edit the /etc/default/smartmontools file and uncomment a line to enable the daemon:

sudo nano /etc/default/smartmontools

The line which enables the daemon is commented out:

#start_smartd=yes

We need to remove the # at the beginning of the line to enable it, so it looks like this:

start_smartd=yes

Configuring smartd

The daemon will now run once we reboot, but first we need to tell it which drives to monitor, and in what way. To do that, we edit the /etc/smartd.conf file. This file is jam-packed with tons of great examples of different configurations for different systems, but it's just a bunch of clutter for our needs. Let's make a backup of the file for safekeeping and reference, and make a new one for our use:

sudo cp /etc/smartd.conf /etc/smartd.conf.bak
sudo rm /etc/smartd.conf
sudo nano /etc/smartd.conf

Yes, we just deleted a file and then we're asking to edit it... Don't worry, nano will create the file from scratch if you try to open something that doesn't exist.

We only need two lines, one for each drive (plus a comment line to remind us what's going on). These lines are quite long.... In case they wrap on the screen here, remember they're just two very long lines:

#Scan of storage and backup drives.  Short scan daily at 4am, long scan on Tuesdays at 1am.
/dev/disk/by-uuid/your_uuid_here -a -I 190 -I 194 -d sat -d removable -o on -S on -n standby,48 -s (S/../.././04|L/../../2/01) -m your_email@gmail.com -M exec /usr/share/smartmontools/smartd-runner
/dev/disk/by-uuid/your_uuid_here -a -I 190 -I 194 -d sat -d removable -o on -S on -n standby,48 -s (S/../.././04|L/../../2/01) -m your_email@gmail.com -M exec /usr/share/smartmontools/smartd-runner

Phew!

The two lines are identical except for the drive's UUIDs. Let's walk through the individual directives and see what we're doing here, just in case you need to modify things for your specific setup:

  • -a Turns on all the default directives: Check the drive, monitor its health, report failures, track changes, etc.

  • The two -I directives are to ignore the two temperature change indicators. Without these, you would get an email every time the drive temperatures go up or down a single degree. Very annoying.

  • -d sat tells smartd that we're trying to read a SATA drive, which your drives most likely are in this project (internally, anyway).

  • -d removable tells smartd that the drive is meant to be removable, and to not freak out if it turns up missing one day.

  • -o on Turns on automatic offline testing, if the drive supports it.

  • -S on Turns on attribute autosave, so the drive remembers its S.M.A.R.T. data between power cycles.

  • The bulky -s (S/../.././04|L/../../2/01) directive is what sets the scan schedule. It's a pair of regular expressions that match the times you want tests to run; the fields after the test type are month, day of month, day of week, and hour, so S/../.././04 means a short test in any month, on any date, on any day of the week, at 4am, and L/../../2/01 means a long test every Tuesday (day-of-week 2) at 1am. (A quick way to test the finished file follows this list.)

  • The -m directive lists which email address to send error reports to.

  • The -M exec directive lets you run a custom script when an error is detected. By default, it's running the built-in /usr/share/smartmontools/smartd-runner script, which itself executes any script found in the /etc/smartmontools/run.d directory. This is a neat little trick that lets you simply drop any number of scripts into that directory, and they'll be run every time there's a disk error. We'll be dropping a script of our own into that directory later in the tutorial.

  • I saved -n standby,48 for the end... This is what keeps your drives from dying an early death. smartd normally checks its drives every 30 minutes, meaning that if the drives are not spinning (and ours will not be, most of the time), it will spin up the drives to check on them every 30 minutes. 24 hours a day, 7 days a week. Definitely not good for the life of your drives! This directive tells smartd to skip its check if the drive is not spinning, but to go ahead and spin up the drive after skipping 48 checks, which at 30 minutes a check comes out to 24 hours. I figure one check per day when I'm not using the system is a good balance.
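
Once the file is saved, you can make sure smartd actually accepts it before rebooting. Its onecheck debug mode registers the devices, runs through the checks once, and exits, printing any configuration errors along the way:

sudo smartd -q onecheck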

Now... Not all external drives perfectly adhere to the ATA hard drive standards. Some USB drives don't correctly report their sleep status, among other things. If you find your drives are still spinning up every 30 minutes even with this directive enabled, you need to manually change smartd's polling interval to prevent premature drive wear.

In that case, you would need to edit the /etc/default/smartmontools file again and uncomment the following line:

#smartd_opts="--interval=1800"

Remove the # again and change the 1800. The number is in seconds (1800 seconds is 30 minutes). You'll want to change it to something like 43200, which is 12 hours. That way you have some leeway so you're not missing daily short tests by accidentally polling the disk too late, after the short test would have fired off.

You can now either start the service with sudo service smartd restart, or you can reboot.


Syncing data

We now have two working NTFS hard drives with their health being monitored on a daily basis. Next we'll set up our mirroring system so that any data we add to our primary drive is automatically backed up to the second drive.

For this, we'll use rsync. It's simple enough to install:

sudo apt-get install rsync

rsync is a great tool that only copies new or changed data. If you added just a few small files to a huge directory, it only copies over the small files. (Over a network, it can even transfer just the changed parts of a big file instead of recopying the whole thing; for local copies like ours it defaults to rewriting whole files unless you pass --no-whole-file, as a commenter points out below.) Perfect for our use! We don't want to burden the little Pi with unnecessary writes.

We'll be using rsync to copy any data that appears on the storage drive over to the backup drive on a daily basis. For that, we'll use a custom script that automatically runs each morning via cron.

(Rather than try to paste the scripts here (word wrapping makes them look horrible), I've put them in a Gist on GitHub. Click here to get to them. There's even an option to download them as a .zip file instead of having to copy and paste.) I don't claim that these are neatly coded in any way, but they work for me!

Take a look at the scripts and make sure the configuration settings work for you (if you're on a Pi, you shouldn't really need to change anything).

Here's a quick summary of how the scripts work (with a minimal sketch after the list):

  • There are two scripts that work together, aptly named syncscript and disable_rsync_script.

  • syncscript runs each day as a cron job and executes rsync, copying any new data over to the backup drive.

  • disable_rsync_script is run by smartd only if a disk error is discovered, and it creates a file that syncscript looks for before performing a sync.

  • If the file exists, there must have been a disk error, so the sync is halted and an email is sent out. You don't want to be reading from or writing to bad drives!
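
To make the flow concrete, here's a minimal sketch of the syncscript logic. This is not the actual Gist code; the flag-file path is a placeholder:

#!/bin/bash
# Sketch of the syncscript idea only - use the real scripts from the Gist.
# Hypothetical flag file created by disable_rsync_script on a disk error.
FLAG=/var/run/pikeeper_sync_disabled
if [ -f "$FLAG" ]; then
    echo "Sync halted: smartd reported a disk error." | mail -s "PiKeeper: sync halted" your_email@gmail.com
    exit 1
fi
# Mirror the storage drive to the backup drive; --delete propagates removals.
rsync -av --delete /media/storage/ /media/backup/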

Let's take the two scripts, and put them where they need to go:

  • syncscript goes in the /etc/cron.daily/ directory. Any script in this directory is automatically run each morning (around 6:30am depending on a few factors).

  • disable_rsync_script goes in the /etc/smartmontools/run.d directory. This is the script we're dropping in for smartd to run if a disk error is ever found.

When you've copied the scripts to their respective directories, make sure you make them executable:

sudo chmod +x /etc/cron.daily/syncscript
sudo chmod +x /etc/smartmontools/run.d/disable_rsync_script

(Once again, you shouldn't have any trouble, but double-check to make sure the scripts work for you. For example, I also run a torrent client on my Pi that saves directly to the storage drive, so the script is set to not back up the specific directory that torrents are actively saved to (no sense in backing up an incomplete file). As I tried to make it as plug-and-play as possible for Pi users, it won't hurt anything to leave that in, but just remember that you can configure that kind of stuff to your needs.)

By default, cron emails the output of the daily job to root. Since we forwarded root's email to our own address back when we were setting up the mail system, we'll now get a nice little summary every morning of what was backed up the night before!


Configuring the UPS

Our UPS is already keeping the power on in the event of an outage, but let's get it talking to the Pi.

We're going to install another daemon; this one is called apcupsd, and it talks with the UPS, keeping tabs on the power situation:

sudo apt-get install apcupsd

Just like with smartd, we need to edit the configuration file and modify another file to get the daemon running at startup. Fortunately this one isn't as complicated!

We'll edit the configuration file at /etc/apcupsd/apcupsd.conf:

sudo nano /etc/apcupsd/apcupsd.conf

Here are the things we need to change:

  • Find the UPSNAME line and give it a name. I call mine PiKeeper.

  • Find the UPSCABLE line and make sure it's set to usb

  • Same with the UPSTYPE line. Set that to usb as well.

  • Find the DEVICE line and make sure it's blank. You just want to see: DEVICE.

So you want these 4 lines to look like this:

UPSNAME PiKeeper
UPSCABLE usb
UPSTYPE usb
DEVICE

There are a lot of other configuration options that you can tweak, but other than these changes, the default settings should work well enough for us.

We also need to edit the /etc/default/apcupsd file to let the daemon start at boot time:

sudo nano /etc/default/apcupsd

Find the ISCONFIGURE line, and change ISCONFIGURE=no to ISCONFIGURE=yes

You can now either start the service with sudo service apcupsd restart, or you can reboot.

Let's see if we can talk to the UPS:

apcaccess status

If all went well, you should now see the status of your UPS, such as the battery charge, line voltage, etc. Voila!

You'll now get error logs whenever the power goes out, and the Pi will automatically shut down in an extended outage when the UPS battery gets low. You can also do all kinds of fancy stuff like sending an email or text alert when the power goes out, but since outages are somewhat common for me, I find those a bit annoying and prefer manually going through the logs. Take a look here for a great tutorial that digs a bit deeper into those features.
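
You can also hook those events yourself: apcupsd runs an executable script named after the event from /etc/apcupsd/ if one exists. As a rough sketch (not part of my original setup), an on-battery email alert could be as simple as creating /etc/apcupsd/onbattery with the contents below and making it executable with sudo chmod +x /etc/apcupsd/onbattery:

#!/bin/bash
# Sketch: email an alert when the UPS switches to battery power.
echo "Power outage - PiKeeper is now running on UPS battery." | mail -s "PiKeeper: on battery" your_email@gmail.com
exit 0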


Setting up samba

The skeleton of the PiKeeper is in place! Now we just need a way to access the files over our network from other computers. For this, we use samba. Samba is what will allow your other devices on your network to access the files on your NAS. When properly configured, the PiKeeper will appear on your home PC as just another drive! You can drag and drop, or do anything with the files on the PiKeeper as if they were right there on your computer. Only everything is backed up and safe!

Let's install Samba support along with some supporting libraries that we need:

sudo apt-get install samba samba-common-bin

As before, we now need to edit the configuration file at /etc/samba/smb.conf to suit our needs. It's another cluttered config file, so let's back up the original and make our own fresh one again:

sudo cp /etc/samba/smb.conf /etc/samba/smb.conf.bak
sudo rm /etc/samba/smb.conf 
sudo nano /etc/samba/smb.conf 

Here's what we want in the file:

[global]
    workgroup = WORKGROUP
    server string = Pi NAS server
    security = USER
    map to guest = Bad User
    syslog = 0
    log file = /var/log/samba/samba.log
    log level = 2
    max log size = 1000
    dns proxy = No
    usershare allow guests = Yes
    panic action = /usr/share/samba/panic-action %d
    idmap config * : backend = tdb
    netbios name = Storage
    load printers = No
    printing = bsd
    printcap name = /dev/null
    disable spoolss = Yes

[Storage]
    comment = Storage
    path = /media/storage
    force user = pi
    read only = No
    guest ok = Yes

Make sure the WORKGROUP entry matches your Windows workgroup name, but if you've never changed it, you should be fine.

You can now either start the service with sudo service samba restart, or you can reboot.

This gives us a non-password-protected share named 'Storage' that anyone on the local network can access. As you can see in the config, what you're really accessing is /media/storage on the PiKeeper. You don't access the backup drive directly; that's all handled behind the scenes with our scripts!

You should now see a 'Storage' share on your network, which you can browse directly or map as a drive on your computer. You shouldn't need a username or password unless you're connecting from a machine that's on a different workgroup or domain (like a work computer that's on your home network, for example). In that case, give it any random username and no password.
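
If you want to reach the share from another Linux machine instead of Windows, cifs-utils can mount it using the guest access we configured:

sudo apt-get install cifs-utils
sudo mount -t cifs //nas.local/Storage /mnt -o guest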

There are a ton of options that you can set, like password protection, per-user drive quotas, etc. See the manual for setting that up.


Tada!!

You now have a power-protected, redundant, network-connected data storage powerhouse! These are the bare bones, but there are all kinds of other things you can add. For example, my PiKeeper also runs a torrent client, the Monitorix logging platform (which gives you a fantastic at-a-glance report on system status), the SickRage video file manager, and Pydio, which turns the PiKeeper into a personal cloud storage platform, allowing you to access your files from anywhere in the world. There's no limit to what you can do!

If something isn't clear or I'm missing any steps, please let me know in the comments and I'll make any necessary changes. Thanks for reading!

level 1

Well, that is quite an extensive guide. Been considering a NAS, will do a more thorough read through later.

Just one thing I need to comment on though:

Sure, there are cloud storage solutions that can back everything up with nearly perfect redundancy, but they can be costly for large amounts of data.

There are many backup services that offer unlimited space for around $6/month. That's for one computer, sure, but that's a very reasonable price. Then of course there are all the benefits of having an offsite backup.

level 2
Original Poster · 2 points · 5 years ago

Definitely a good thing to have an offsite backup in addition to redundant drives. Multiple drives in the same place won't protect you from a fire or flood.

That being said I didn't know offsite storage had gotten that cheap! A recurring $72/year is still a little steep for my needs but it's far better than I thought. Thanks!

level 2

Yeah they have unlimited space, but not unlimited bandwidth. Let me know how uploading hundreds of gigs monthly goes for you.

level 1

I had been looking for options for a NAS, and there you are solving my problem. That's a very clean solution.

level 1

I don't intend to do this, but upvote for an excellent quality post! I'm so used to seeing shitpost blog slam disguised as a self-post, this is really nice to see.

level 1

Wow, that is detailed and very well thought out.

The only thing I would do is add the backup drive to your samba configuration but set "read only = Yes". Since your backup is one day old, if you accidentally delete a file, you can easily recover the previous day's backup. Since the mount is read only, you don't need to worry about messing it up.
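
For reference, that could look something like this appended to smb.conf, modeled on the [Storage] block from the tutorial (a sketch, not the commenter's exact config):

[Backup]
    comment = Backup (read only)
    path = /media/backup
    read only = Yes
    guest ok = Yes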

level 2
Original Poster · 2 points · 5 years ago

That's actually a great idea! If I accidentally deleted something, I was planning on accessing the backup drive via SSH, but that's a better solution. Thanks!

level 1
RPi3 · 2 points · 5 years ago

Really detailed tutorial. Planning on doing the same. But I wanted to ask you, could I access samba from my Mac?

level 2
Original Poster · 3 points · 5 years ago · edited 5 years ago

You can! By default, NTFS is read-only on Mac, but there are ways around that.

level 1
Comment deleted by user · 5 years ago
level 2

Agreed. Why not both? Is there a problem running both Samba and NFS at the same time?
I am also thinking of having a /media subdirectory, and running miniDLNA from that directory alone.

level 1
[deleted]
2 points · 5 years ago

Some things about rsync.

When used locally, as in your example, it will always copy the whole file, even if it's 3 GB.

As the man page states:

-W, --whole-file
   With this option rsync’s delta-transfer algorithm is not used and the whole file is sent as-is instead.  
   The transfer may be faster if this option is used when the bandwidth between the source  and  destination
   machines is higher than the bandwidth to disk (especially when the "disk" is actually a networked filesystem).
   This is the default when both the source and destination are specified as local paths, but only if no 
   batch-writing option is in effect.

So, in localhost, delta-transfer is not used by default.

To enable it for local use add this option:

--no-whole-file

Also, you're making an exact copy of the files. This does not prevent corruption. If you corrupt a file, at the end of the day it will be corrupted at your backup location too.

A way to fix this is to backup files with the "--backup" option. Also check the "--backup-dir" to separate backed up files from original files.

This allows you to have all the versions of your modified files in a separated directory.
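
For example (a sketch of those options, not from the original comment), this keeps the current mirror in one directory and moves each day's overwritten or replaced files into a dated directory:

rsync -av --backup --backup-dir=/media/backup/old/$(date +%F) /media/storage/ /media/backup/current/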

level 1

A bit of caution with identical drive UUIDs: once I bought 2 identical 2 TB USB HDs.

They were so identical they had the same uuids. This caused system issues, where I plugged in both drives and only one would be seen.

I recall using tune2fs to alter the uuids on the drives.

Just a heads up on what could be a rare but confusing potential problem.

level 1
2 points · 4 years ago

Dark - I'm not sure whether you are still answering questions, but I am hopeful. I'm using your smb.conf parameters to help solve some problems.

I have a couple of questions if I may.

  1. In the [Global] section, you have the following "security = USER" - can you explain this or direct me to a location where I can read more about this?

  2. Also, you have "netbios name = Storage" - where does one determine their netbios name?

  3. "user share allow guests = yes" what does this do?

Thanks, Jon

level 2
Original Poster · 2 points · 4 years ago · edited 4 years ago

Yep I'm still here!

I wish I had documented all my work on the smb.conf file, it's a short blurb in the tutorial but I probably spent more time getting that to work than anything else. :)

 

In the [Global] section, you have the following "security = USER" - can you explain this or direct me to a location where I can read more about this?

I found through trial and error that using security = USER as opposed to security = SHARE gave me more consistent results; I couldn't always connect to the share when it was set to SHARE, and I could never figure out why.

Instead, I set security to USER (which is now default in new versions of Samba) and used map to guest = Bad User which maps all usernames that don't exist to a guest account. Since NO usernames exist in the config file, everyone is a guest.

The Samba configuration manual is found here, I spent hours studying that thing to get everything working. :)

 

"user share allow guests = yes" what does this do?

usershare allow guests = Yes enables guest access which allows the above to work.

 

Also, you have "netbios name = Storage" - where does one determine their netbios name?

This line actually sets the NetBIOS name of the Samba share to whatever you want. Think of NetBIOS like the Microsoft version of the Avahi daemon, which in Linux lets you map an IP to a DNS name. I actually find myself using the NetBIOS name more than the Avahi name in practice. With netbios name = storage, you can just type 'storage' in your web browser, command line, etc, and you'll get the NAS. ping storage works on the same network, for example. :)

level 1
[deleted]
1 point · 5 years ago

You trust a pi to write, over USB, your filesystem? That is scary! Hope nobody takes a flash photo nearby ;)

level 2
Original Poster3 points · 5 years ago

Honestly it is a bit scary! Which is why there's a redundant hard drive should one fail, and those drives are designed to be plugged into a windows system at a moment's notice should the Pi fail. The whole thing is designed with failure in mind.

level 1
[deleted]
1 point · 5 years ago

NTFS? Going with a FUSE filesystem? C'mon, you should be ashamed.

Also, you could cut out the syncing if you went with mdraid. I guess it's where you want your point of failure though. mdraid will keep a copy of your data available at all times (and do monthly read checks on the drives) but won't save you from stupidly deleting your files.

level 2
Original Poster3 points · 5 years ago · edited 5 years ago

I expected heat for that part!

If I had to go with external drives and a Pi instead of the ZFS server setup I wanted to use, then I wanted everything to remain portable. If the Pi or the SD card dies I can simply plug one of the drives into any PC and access everything immediately. (That reminds me, I should be backing up the SD card file system too!)

Or: If zombies are attacking and I need to run, I wanted to be able to grab a drive off the shelf and plug it in at the library where my crack squad of zombie hunters are grouping up to go over the plans that are stored on the NAS. ;)

I'm actually going to go back and add a little more justification for NTFS, with a note that LVM mirroring would work too.

level 2

I was confused as well; why not use a Linux filesystem and LVM to mirror the drives? Then share them over Samba as CIFS shares; your Windows boxes won't care, but I think you would get better performance out of Linux disk tools instead of relying on ntfs-3g.

level 2

I did a similar setup and the overhead for NTFS was horrible, so I reformatted all my drives to ext4. CPU utilization went from around 50% to practically zero. Who cares about portability of the drives when it's just as easy to pop up another Linux box to read them? If you're schlepping drives around, 5 TB units are not the way to go for portability.

What's the overhead of running mdraid? Or LVM?

level 2
2 points · 5 years ago · edited 5 years ago

Actually it's very sensible, sorry. Let me describe the ways:

  • Compatibility: NTFS is compatible with nearly every mainstream PC in the world. Even Macs can read it natively. No extra software needed. Additionally in worst case scenario restore operations, NTFS file recovery tools are way more prevalent off the shelf. Considering it supports 4GB+ size files, there are really no drawbacks.

  • RAID IS NOT A BACKUP! What if op accidentally deletes a file? At least now OP can go to the second disk and restore it if the rsync hasn't taken place.

Also: Although statistically unlikely, if OP buys disks from the same manufacturer, they will have the same MTTF... Why risk having both disks fail at the same time? It runs hotter to boot.

OP - well done
