Windows7ge

The Pure Solid State Server Build Log


Posted · Original Poster (OP)
6 hours ago, leadeater said:

Need to use Server 2019 if you want ReFS and dedup

Eh, the files that take up the most space are .mp4s, which I don't think can or will dedup very well. I'd rather play with ReFS if that's the case.

 

6 hours ago, leadeater said:

For file copies RAM disk to network RAM disk I get around 1.3GB/s to 1.67GB/s. To my SSD array I get around 800MB/s to 1GB/s. Explorer isn't that great at file copies and stress tests though, not if you want to get the maximum possible results. For that you can use IOmeter, which lets you load up a bunch of parallel high-queue-depth I/O against the share.

Testing parallel processing capabilities isn't worth it when I'm the only client. I'd just be looking up numbers for the sake of the numbers. Plus I don't know how to use IOmeter.

 

I remember when LMG was experimenting with 100Gbit InfiniBand they were able to push 4GB/s over File Explorer, so I don't think that's my bottleneck. Not unless File Explorer has some sort of direct influence on the storage media being used (besides sending the data in that direction). For the sake of testing though, do you know of another protocol or program for network file transfers besides File Explorer that may be less restrictive, if that's the case?

5 hours ago, Windows7ge said:

I remember when LMG was experimenting with 100Gbit InfiniBand they were able to push 4GB/s over File Explorer, so I don't think that's my bottleneck.

InfiniBand has way lower latency and RDMA support; you need RDMA to get the really high throughput. The X540s don't have it.

Posted · Original Poster (OP)
52 minutes ago, leadeater said:

InfiniBand has way lower latency and RDMA support; you need RDMA to get the really high throughput. The X540s don't have it.

I had looked it up earlier and found out on my own that the X540 doesn't support RDMA. I'm just saying I don't think File Explorer is my bottleneck, that's all.

 

I tried doing a local file transfer and hit 1GB/s writing to the pool. I think with some tweaking I could get close to this over the network, but it seems that without some type of faster cache or expanding the pool, this is the best I can expect for the time being.

Posted · Original Poster (OP)
3 minutes ago, leadeater said:

Sooo..... what did you change? 😀

I enabled jumbo frames on the network switch and set the MTU of all four NICs to 9014. It seems to really help when the CPU clock isn't high enough to pack 1500-MTU packets fast enough to saturate a 10Gbit link. Now if the X540 supported RDMA I could probably keep it at 1500, but the CPU needs a little help.
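
For anyone wanting to verify jumbo frames are actually working end to end, a quick sanity check (a sketch; the IP below is a made-up example, swap in your own) is to ping with a near-9000-byte payload and the don't-fragment flag set. From a Linux shell:

ping -M do -s 8972 192.168.0.10    # 8972 = 9000-byte MTU minus 28 bytes of IP/ICMP headers

On Windows the equivalent is ping -f -l 8972 192.168.0.10. (The 9014 MTU on the Windows side includes the 14-byte Ethernet header, so the IP MTU is still 9000.) If the switch or either NIC isn't passing jumbo frames you get a fragmentation error instead of replies.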

Posted · Original Poster (OP)

I need to ask, and leadeater, you'll probably know more about this specifically than others. The next step is that we need to set up software to back up the system. I plan to have an offsite backup solution. The remote server will run FreeNAS, which supports SSH/SFTP. I need some form of replication software for Windows Server (if it doesn't have it to begin with) that can automatically replicate the data on a schedule and send the copies to the FreeNAS box over the internet via SSH/SFTP.

 

Anybody know where I could find such software?


Have you considered replacing the 40mm fans?

(maybe you did mention it or did it, but I didn't see it in the 4 pages)

Based on the first-page pictures it looks like you could easily remove groups of 4x 40mm fans and replace them with 80mm fans, or potentially 92mm fans.

Seems like you have exactly 5 groups of 4x 40mm fans, so it would be quite easy to put 5 larger fans there.

You would get more airflow for less noise.

Also, if those small fans push air towards the outside of the case, how's the airflow with the two fans on the sides pushing air out?

 

You could go nuts and use 38mm thick Sanyo Denki fans along with a fan controller: https://www.digikey.com/product-detail/en/sanyo-denki-america-inc/9GA0812P1H611/1688-2174-ND/8285297

 

You'll need a fan controller to adjust rpm using PWM, otherwise the server will sound like an airplane with the fans running at 8000rpm, but you will move air around.

 

Or, if space is an issue, you could go with thin 15mm fans like these: https://www.digikey.com/product-detail/en/delta-electronics/EFC0812DB-F00/603-1159-ND/1850528

 

Here's loads of fans, filtered by 12v and 80x80 mm : https://www.digikey.com/short/ppq058

 

4 hours ago, Windows7ge said:

The next step is that we need to set up software to back up the system. I plan to have an offsite backup solution. The remote server will run FreeNAS, which supports SSH/SFTP. I need some form of replication software for Windows Server (if it doesn't have it to begin with) that can automatically replicate the data on a schedule and send the copies to the FreeNAS box over the internet via SSH/SFTP.

rsync over ssh would probably be the go-to for that.

 

https://heejune.me/2018/08/02/setup-rsync-server-over-ssh-on-windows-server-2012-easy-way/
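
At its simplest it's a one-liner, something like this (the paths, user, and host here are just placeholders):

rsync -avz -e ssh /path/to/data backupuser@freenas.example:/mnt/pool/backups/

-a preserves permissions and timestamps, -z compresses in transit, and -e ssh runs the transfer over SSH so it only needs port 22 open.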

Posted · Original Poster (OP)
2 hours ago, leadeater said:

It says "the easy way" but it's still going over my head. And apparently it's all CLI, which means even if I manage to install rsync I still have to learn how to configure & enable it.

 

Well, I don't actually NEED this for quite some time to come, so I guess I'll just have to sit down and find the time to learn.

 

I also now have a powerful VM server running Proxmox, so I can test rsync extensively in an isolated environment before I actually deploy it.

 

I hope it works out. In the meantime I'm searching for possible alternatives. Having options is always nice.


Oh nice! Glad I followed this a while back or else I would have forgotten about it :P

Looking real good so far. And I am just drooling at all those SSDs. Although I couldn't even begin to consider what I would use to saturate that massive uplink. My 1Gb teamed links are more than enough as is XD.



Nitpick: you will lose memory performance if you are using just one stick of RAM per processor. Those Xeons use quad-channel RAM, so with two of them you need 8 matching sticks of memory for best performance.


7 minutes ago, maxtch said:

Nitpick: you will lose memory performance if you are using just one stick of RAM per processor. Those Xeons use quad-channel RAM, so with two of them you need 8 matching sticks of memory for best performance.

Probably not going to matter for this; I've even seen commercial storage products do similar with respect to RAM channels. It grates me every time I see it.

9 minutes ago, leadeater said:

Probably not going to matter for this; I've even seen commercial storage products do similar with respect to RAM channels. It grates me every time I see it.

It depends on what you do. If it is a pure storage server the memory bandwidth won't matter much, but if you compute on it, that becomes very taxing on the memory bandwidth.

 

I have an ex-server workstation I compile code on. Compiling code with aggressive optimizations is a strictly CPU-only, integer-only, high-throughput, high-IOPS compute task when parallelized. Not only do I need octa-channel memory, but also an NVMe SSD, to keep my dual Xeon E5-2680s fed.


13 minutes ago, maxtch said:

It depends on what you do. If it is a pure storage server the memory bandwidth won't matter much, but if you compute on it, that becomes very taxing on the memory bandwidth.

 

I have an ex-server workstation I compile code on. Compiling code with aggressive optimizations is a strictly CPU-only, integer-only, high-throughput, high-IOPS compute task when parallelized. Not only do I need octa-channel memory, but also an NVMe SSD, to keep my dual Xeon E5-2680s fed.

That's why I said "for this": storage isn't exactly a big compute or memory workload, though there are particular software-defined storage technologies that can be, like very high-performance erasure coding.

10 minutes ago, leadeater said:

That's why I said "for this": storage isn't exactly a big compute or memory workload, though there are particular software-defined storage technologies that can be, like very high-performance erasure coding.

If, for example, you implement on-the-fly full-disk encryption, you need strong compute. Although for a storage server it is better to implement that on the client side, so the server never sees the cryptographic keys. Or, if your server also hosts a DBMS, compute is needed to deal with the requests.


12 minutes ago, maxtch said:

Or if your server also hosts a DBMS

Then it isn't just a storage server 😉, nor would I refer to it as one. I more commonly see remote block storage for database servers though, so compute and storage are separated. Full RAM channel allocation is best, but in a lot of cases it's far from required; I just hate not seeing it done the way I would, irrespective of what the server is doing. Just size the DIMMs to meet the total capacity you're after. Coming across servers with 128GB/256GB of RAM running one or two channels down is disheartening, because it's so easily avoidable.

 

Edit:

And nobody ever actually adds more RAM to the system later, even if they planned to, which is why the channels weren't populated to start with.

Posted · Original Poster (OP)
8 hours ago, 8uhbbhu8 said:

Oh nice! Glad I followed this a while back or else I would have forgotten about it :P

Looking real good so far. And I am just drooling at all those SSDs. Although I couldn't even begin to consider what I would use to saturate that massive uplink. My 1Gb teamed links are more than enough as is XD.

My first real use of the full 20Gbit connection was to transfer 80GB of files.

 

The transfer took <1min.
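
Back-of-the-envelope: even assuming it took the full minute, 80 GB / 60 s works out to about 1.33 GB/s, or roughly 10.7 Gbit/s. That's more than a single 10Gbit link can carry, so the second link in the team was genuinely being used (and since it took less than a minute, the real rate was higher still).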

 

5 hours ago, maxtch said:

Nitpick: you will lose memory performance if you are using just one stick of RAM per processor. Those Xeons use quad-channel RAM, so with two of them you need 8 matching sticks of memory for best performance.

If you read through the topic you'll see that 1 stick per CPU is in no way the final configuration. It's just that each stick costs at least $500 (usually a lot more), so I didn't plan on waiting for the spare money to buy sticks when the system can function with just 2. Network transfers aren't bottlenecked by RAM and my use case isn't RAM-intensive, so it's fine for now. More RAM will definitely come in the future though; I guarantee I want quad channel.

2 minutes ago, Windows7ge said:

My first real use of the full 20Gbit connection was to transfer 80GB of files.

 

The transfer took <1min.

I wish I had that kind of transfer speed when I'm restoring my backups.

3 minutes ago, Windows7ge said:

If you read through the topic you'll see that 1 stick per CPU is in no way the final configuration. It's just that each stick costs at least $500 (usually a lot more), so I didn't plan on waiting for the spare money to buy sticks when the system can function with just 2. Network transfers aren't bottlenecked by RAM and my use case isn't RAM-intensive, so it's fine for now. More RAM will definitely come in the future though; I guarantee I want quad channel.

TL;DR, sorry. Just keep in mind that some workloads you might throw at those processors can be demanding on memory bandwidth, for example a DBMS, or compiling code with aggressive optimizations parallelized across threads.


Posted · Original Poster (OP)
1 hour ago, maxtch said:

TL;DR, sorry. Just keep in mind that some workloads you might throw at those processors can be demanding on memory bandwidth, for example a DBMS, or compiling code with aggressive optimizations parallelized across threads.

Shortened: This RAM is expensive but quad channel WILL be achieved in the future. I'll @ you about it.

Posted · Original Poster (OP)

I thought I would give an update on the whole situation with Supermicro making me put down a security deposit for cross-shipment (no other replacement option was available).

 

After having the defective board officially back in their possession for the past 8 days, Supermicro finally approved the return and refunded me (I only had to send them an e-mail asking what was going on). They even refunded the state tax associated with the "purchase", which I did not expect to have returned.

 

As much as I like the idea of having a replacement board overnighted to me the day after submitting an RMA, I REALLY did not like having to put down over $650, going through a very detailed and hard-to-navigate RMA process, then being told:

  • We may not refund you
  • The replacement board may be refurbished
  • If the board is irreparable, you have to pay to have the defective board shipped back to you (had I asked for a repair instead)

Thankfully, despite the rather complicated RMA procedure that I nearly screwed up about 3 times, I got refunded.

 

To be honest, for future servers I'm probably going to go back to ASRock Rack, even if their boards aren't "real" server motherboards. I've worked with their RMA service; it's like any other desktop motherboard manufacturer's: no security-deposit weirdness, and their boards have worked pretty well for me. Unless I end up building something that requires a VERY specific motherboard layout or a VERY specific list of features that only Supermicro offers, I don't think I'm going to buy from them again. Not with an RMA service like this, telling me I may not even get my money back if the return is unsatisfactory.

Posted · Original Poster (OP)

This update is going to be a bit of a rant.

 

The past four days have been nothing but trying things and failing, then trying more things and failing more, until finally I got something to work between the Windows server and a FreeNAS VM (it's only in a VM for testing purposes; don't run FreeNAS in a VM for regular use).

 

To let it be known immediately: all hopes of a free GUI replication program that supports replication over SSH/SFTP/rsync failed.

 

As leadeater suggested (and as Bitter had mentioned for an unrelated issue), using rsync seems to be the only real free option for cross-platform automatic synchronization via standard networking protocols.

 

Getting it working for my exact configuration wasn't easy, though. First off, the "Setup Rsync Server Easy Way" guide was in no way designed for your average user. Throughout the guide there are whole sections of information that are excluded because the author expects you to already know how to do them. Most of them I did not, and ultimately I had to abandon it because near the end, when you need to fetch the rsync.exe file, the hyperlinks take you to dead sites.

 

I did get something out of it though.

 

[Screenshot: a Windows command prompt in an SSH session via PuTTY]

That is a command prompt accessed via SSH using PuTTY. That is so cool. I never thought I could remote into the Windows equivalent of a Linux terminal.

 

What I found was a program called Cygwin, which installs a terminal-like environment on Windows. Two of its supported packages are rsync & OpenSSH.
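
A handy bit I found along the way: the Cygwin installer can be run unattended to grab those packages (run from the folder where you saved the installer from cygwin.com):

setup-x86_64.exe -q -P rsync,openssh

-q runs it without prompts and -P takes a comma-separated list of extra packages to install.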

 

I knew understanding how to configure rsync was going to be the biggest pain in the ass, but I managed to find a website that explained most of how to set it up. A lucky find. With that, my test backup script looks like this:

#!/bin/bash
rsync -avzhP -e 'ssh -p 22' --delete --stats --log-file=/cygdrive/c/Users/Administrator/Desktop/backup-log/backup-`date +"%F-%I%p"`.log /cygdrive/d/Shares/Storage/users/jon root@192.168.0.247:/mnt/test/

If you're looking at that and don't understand any of it, then you know how I felt while figuring out how to write it.
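
For the short version, here's roughly what each flag is doing (my own reading of the rsync man page, so double-check it):

# -a              archive mode: recurse and preserve permissions, times and ownership
# -v -z -h        verbose output, compress in transit, human-readable sizes
# -P              show progress and keep partially transferred files
# -e 'ssh -p 22'  run the transfer through SSH on port 22
# --delete        remove files from the destination that were deleted on the source
# --stats --log-file=...  print a transfer summary and write a dated log file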

 

So, great, we have a working script. We're done...right?...

 

Nope, we need to automate it, but we have an issue: while authenticating the session with the FreeNAS server, it prompts you for the user's password on the server. That's a problem because we don't want to have to authenticate manually to run automatic backups. The solution? Public/private key authentication without password protection. That sounds bad, but it's stronger than plain password authentication and it lets the script run unattended. However, getting the user to authenticate itself requires a whole other set of setup instructions, without which you'll get the error "Permission denied (publickey)".
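
The general shape of that setup from the Cygwin terminal was something like this (a sketch; 192.168.0.247 is my test FreeNAS box, and older systems may need -t rsa instead of ed25519):

ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519    # empty passphrase so the task can run unattended
ssh root@192.168.0.247 'cat >> ~/.ssh/authorized_keys' < ~/.ssh/id_ed25519.pub    # install the public key on the server

After that, ssh root@192.168.0.247 should drop you straight into a shell with no password prompt.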

 

OK, so now we can run the backup script and the server automatically authenticates us. Great. Now we need to set up a Windows Task to run it.

 

But guess what.

 

Our script isn't an .exe; in fact, shell scripts don't need file extensions at all, and it won't run if you just tell Windows to open it with our simulated terminal. So how do we add it to a task? Well, when executing an application within Task Scheduler there is a field that allows you to add an argument. So NOW we have to figure out the CLI argument that will make the program launch the script upon startup.

 

And that argument is:

/bin/bash -l -e '/cygdrive/c/cygwin64/home/administrator/rsync-backup-script'
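
In other words, the scheduled task just runs Cygwin's bash.exe with that argument. Creating it from an elevated command prompt would look roughly like this (the task name and schedule are placeholders I made up):

schtasks /Create /TN "Rsync Backup" /SC DAILY /ST 02:00 /TR "C:\cygwin64\bin\bash.exe -l -e '/cygdrive/c/cygwin64/home/administrator/rsync-backup-script'"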

 

So with that figured out:

  1. Windows Task Scheduler launches our simulated Terminal
  2. The Terminal automatically runs our backup script
  3. The Rsync application automatically SSH's into the server
  4. Rsync synchronizes our data, outputs a log, closes the session, then the terminal exits on its own.

So rsync over SSH on Windows, once you have it figured out, is a very nice tool, but I have to say that NO SINGLE WEBSITE HAD A GUIDE ON THIS! I had to go over several websites and forums to locate each next step in getting the whole configuration running.

 

I am going to write a guide and post it here on the forum. Clear and collected instructions need to exist on the Internet for this.

 

Alright, I'm done ranting. Next on our list of things left to do: I need some UPSes. There's not a whole lot to them, but I'll go over them when I put in the order and they show up.


Making tools made for Linux/Unix work with Windows is always a pain, unless there's an easy Windows-side version that integrates with the Linux-side stuff. I remember mucking about with RDP about 10 years ago, getting it working both ways between Windows and *nix over gigabit for several computers in a room; it was cheaper than a KVM switch! On Windows and Linux, because I'm lazy, I use grsync, which is a GUI for rsync, because easy clicking. That's just for local stuff as far as I know, but if the target drive is mapped in Windows I don't see why you couldn't use it on the local machine, run as a task, to execute a backup. You know a lot more about this than I do, though. I just plug a big hard drive into my dumb boxes, start grsync working, and it's done when I wake up from my nap.

 

http://www.opbyte.it/grsync/

Posted · Original Poster (OP)
2 hours ago, Bitter said:

Making tools made for Linux/Unix work with Windows is always a pain, unless there's an easy Windows-side version that integrates with the Linux-side stuff. I remember mucking about with RDP about 10 years ago, getting it working both ways between Windows and *nix over gigabit for several computers in a room; it was cheaper than a KVM switch! On Windows and Linux, because I'm lazy, I use grsync, which is a GUI for rsync, because easy clicking. That's just for local stuff as far as I know, but if the target drive is mapped in Windows I don't see why you couldn't use it on the local machine, run as a task, to execute a backup. You know a lot more about this than I do, though. I just plug a big hard drive into my dumb boxes, start grsync working, and it's done when I wake up from my nap.

 

http://www.opbyte.it/grsync/

For local synchronization I use FreeFileSync. This project requires the ability to synchronize over the internet using the SSH protocol, and it relies on public/private key authentication to boot, so Grsync won't work. I appreciate the suggestion though.

