-
04-28-2015, 05:39 AM #1
Web Hosting Master
- Join Date
- Apr 2006
- Posts
- 931
ZFS for VM storage in production environments
Hi, I've known for years that ZFS is used in FreeNAS, but I don't know anyone who has used it for virtualization in production environments. What is your experience with the ZFSonLinux port? I've been testing it with the aim of reusing some old servers without a RAID controller (Supermicro X8DTi, dual Xeon X5670 and 6 SSD drives in ZFS RAID10). I have 8GB of RAM allocated to ZFS and 16GB for the zvol. In my tests ZFS has reached over 2GB/s in write tests of 100GB files with dd. I know that ZFS relies heavily on memory, but how does it get these results if I have limited the RAM and the zvol is smaller than the file? Thank you.
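A likely explanation for the 2GB/s figure: without an explicit flush, dd reports how fast the ARC/page cache absorbs the data, and /dev/zero input compresses to almost nothing if lz4 compression is on, so the disks may never see the full 100GB. A minimal sketch of a more honest test (hypothetical file paths, small size for illustration):

```shell
# Generate incompressible data first, then force the write to stable
# storage before dd reports throughput; conv=fdatasync makes dd fsync
# the output file at the end, so cache-only speed is not measured.
dd if=/dev/urandom of=/tmp/src.bin bs=1M count=64 2>/dev/null
dd if=/tmp/src.bin of=/tmp/zfs_write_test.bin bs=1M conv=fdatasync
```

If the numbers drop sharply with fdatasync and random input, the original 2GB/s was the ARC, not the SSDs.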
-
04-28-2015, 06:44 AM #2
Web Hosting Master
- Join Date
- Oct 2003
- Location
- The Netherlands
- Posts
- 1,271
Hi,
This is something of great interest to me, but I've decided that ZoL just isn't fully there yet for production.
As for your question: you can never have enough RAM for ZFS, but since you work exclusively on SSDs, I think the value of the ARC is a bit limited.
I'm guessing you are getting the 2GB/s on the strength of those SSDs (which ones are you using, if you don't mind me asking?).
One of the things you'll run into with ZoL is that it doesn't support TRIM (I believe only FreeBSD does at this point).
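For reference, the 8GB ARC allocation the OP mentions is normally done with a ZoL module parameter; a sketch of what that looks like (the file path is the usual convention, the value is bytes):

```
# /etc/modprobe.d/zfs.conf -- cap the ARC at 8 GiB
options zfs zfs_arc_max=8589934592
```

This takes effect when the zfs module loads, so it needs a reboot or module reload to apply.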
-
04-28-2015, 07:33 AM #3
Web Hosting Master
- Join Date
- Apr 2006
- Posts
- 931
The difficulty is knowing the recommended limit for a production environment without any performance penalty, so I ran the following test:
- Created a KVM VM (Proxmox) with unlimited I/O and executed a hard drive stress test (command stress -d 60). Host iowait remained above 2%.
- Performed a write test of a 100GB file on the host; the bandwidth ranged from 1.5GB/s down to 90MB/s, so there is a bottleneck somewhere.
It would be a shame not to use ZFS; it would be a good way to reuse hardware that still offers good performance, and these Supermicro motherboards support up to 192GB of memory.
-
04-28-2015, 04:15 PM #4
Web Hosting Guru
- Join Date
- Apr 2012
- Posts
- 275
What about using SmartOS or some other Solaris derivative? I mean, ZFS is not exactly a "fringe" technology, since big, high-performance production Oracle environments use it.
Also, if you are using decent SSDs and not commodity consumer drives, modern garbage collection is excellent.
My site dedicated to server and workstation hardware: http://www.servethehome.com
-
04-28-2015, 10:58 PM #5
Junior Guru Wannabe
- Join Date
- Sep 2006
- Location
- Michigan
- Posts
- 66
MNX.io - 100% SSD Cloud Hosting, and Linux server management.
-
04-29-2015, 07:43 AM #6
Web Hosting Master
- Join Date
- Oct 2003
- Location
- The Netherlands
- Posts
- 1,271
I was thinking of trying this on one of my older machines. The list of tech looks impressive, but I'm not sure about the HW support.
Would definitely love to get my VPS backed by ZFS. Snapshots and send/receive are really awesome tools.
Not sure how far bhyve is, but I would miss the KVM hypervisor.
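The snapshot and send/receive workflow is worth sketching, since it is the main draw here. Pool, dataset and host names below are hypothetical, and this needs a live pool, so it is illustrative only:

```shell
zfs snapshot tank/vm-101@nightly                 # point-in-time snapshot
zfs send tank/vm-101@nightly | \
    ssh backuphost zfs receive backup/vm-101     # full replication stream
# Later, send only the delta between two snapshots:
zfs send -i nightly tank/vm-101@nightly2 | \
    ssh backuphost zfs receive backup/vm-101
```

The incremental form is what makes it practical for VM backups: only changed blocks cross the wire.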
-
04-30-2015, 01:12 AM #7
Backup Guru
- Join Date
- Feb 2002
- Location
- New York, NY
- Posts
- 4,618
My ZFS experience is with FreeBSD (and earlier on Solaris), but I've talked to several ZoL users, and they seem to think that it's quite usable now. If you want to do a more advanced virtualization setup, you could use Xen with a FreeBSD-based storage domain. FreeBSD has a Xen blkback driver, so it can export devices back to dom0 or other VMs. If your server has an IOMMU, then you can even give FreeBSD direct access to your HBA.
Bhyve has come a long way. If you're familiar with KVM, then you shouldn't have much trouble using Bhyve.
https://wiki.freebsd.org/bhyve
Scott Burns, President
BQ Internet Corporation
Remote Rsync and FTP backup solutions
*** http://www.bqbackup.com/ ***
-
04-28-2015, 11:12 PM #8
Web Hosting Evangelist
- Join Date
- Sep 2004
- Location
- Cluj-Napoca, Romania
- Posts
- 504
ZFS on Linux used to have a nasty bug where it would eat all your RAM and then some. It was fixed a couple of versions ago.
We started using ZFS on Linux with KVM and OpenStack 2-3 months ago and it's all good.
We know another provider that has used it for the last 12 to 18 months, and they kept us informed about all the bumps.
We also used Solaris and FreeBSD before.
We still have a backup server with FreeBSD where we back up cPanel accounts over iSCSI and benefit from snapshots, compression and deduplication. Many problems with this server, but they might be hardware-caused.
-
04-28-2015, 11:26 PM #9
Web Hosting Master
- Join Date
- Mar 2012
- Posts
- 1,425
I like this thread. Nice, finally something of interest.
--
-
04-29-2015, 12:28 AM #10
Web Hosting Master
- Join Date
- Mar 2003
- Location
- chicago
- Posts
- 1,790
Why not use it on an OS it's been used on for years, like FreeBSD?
-
04-30-2015, 05:31 AM #11
Web Hosting Master
- Join Date
- Oct 2003
- Location
- The Netherlands
- Posts
- 1,271
Thanks Scott, valuable feedback.
There was a new release of ZoL, but feedback has been a bit mixed. I contacted somebody who did various tests of KVM performance between ZFS, BTRFS, EXT4, etc. (read his helpful article here: http://www.ilsistemista.net/index.ph...omparison.html )
He did mention that he would be reluctant to use ZoL in production for now, but then again I would be reluctant offering a stranger on the internet any advice as well.
I didn't know Xen could do that, that's interesting!
Also, Bhyve does look interesting. I'm guessing it will see a lot of love with Release 11 as well
-
04-30-2015, 06:45 AM #12
Web Hosting Master
- Join Date
- Apr 2006
- Posts
- 931
Barry, a very interesting comparison, but it is not very useful to evaluate ZFS without a good amount of RAM allocated or an SSD cache for L2ARC, because its main advantages over other file systems are eliminated.
-
04-30-2015, 07:34 AM #13
Web Hosting Master
- Join Date
- Oct 2003
- Location
- The Netherlands
- Posts
- 1,271
Well, that would be one of the advantages of ZFS over the others. Still, it's good to compare on the exact same HW, but I agree that I would include more memory and an L2ARC for VPS workloads, since that's where a lot of your extra performance will be coming from. Not sure if this still applies, but one of the issues with ZoL is that the ARC and the Linux disk cache are separate, meaning you'll have some duplication.
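Not authoritative, but the double-caching can be attacked from either side; dataset names below are hypothetical:

```shell
# Option 1: keep only metadata in the ARC for the VM datasets, so guest
# data is cached once instead of in both ARC and host page cache:
zfs set primarycache=metadata tank/vm-disks
# Option 2: leave the ARC alone and have KVM open the disks with
# cache=none, so guest I/O bypasses the host page cache and the ARC
# becomes the single host-side cache layer.
```

Which side to restrict depends on the workload; restricting the ARC to metadata gives up its adaptive caching for that dataset.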
-
05-18-2015, 12:35 AM #14
Newbie
- Join Date
- Sep 2014
- Posts
- 5
We run 40+ production VMs on FreeNAS with zero issues. All VMs are on ESXi via iSCSI, over 6+ months without a single hiccup. The FreeNAS GUI just makes admin much easier, I think. Give it plenty of RAM and ZFS will perform excellently, and you can always tweak with an L2ARC or ZIL if needed.
-
05-18-2015, 12:28 PM #15
Web Hosting Master
- Join Date
- Apr 2004
- Location
- Singapore
- Posts
- 1,234
We run production VMs (VMware and KVM) connected to FreeBSD/ZFS storage via NFS; it has worked fine for years.
We only use the compression and snapshot features, no deduplication.
For best performance:
1 SSD for read cache (can be any SSD with large capacity)
1 SSD for ZIL/SLOG (write journaling); we use an Intel DC S3700. Use the best-performing SSD you can get, with a supercapacitor for data protection.
The data disks can be SATA or SSD, your choice.
Alan
Alan Woo, alan [@] ne.com.sg
= NewMedia Express Pte Ltd (AS38001)
= IP Transit, Colocation & Dedicated Servers in Singapore | Hong Kong | Tokyo | Seoul | Jakarta |
= Singapore Speedtest speedtest.sg
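The two-SSD layout Alan describes can be sketched as pool commands (pool name and device paths are hypothetical, and this needs a live pool):

```shell
zpool add tank cache /dev/disk/by-id/ata-Large_SSD    # L2ARC read cache
zpool add tank log   /dev/disk/by-id/ata-INTEL_S3700  # SLOG (dedicated ZIL)
zpool status tank                                     # confirm cache/log vdevs
```

Only synchronous writes (NFS commits, for example) benefit from the SLOG, which is why it matters for VM storage over NFS.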
-
05-27-2015, 12:57 PM #16
Junior Guru Wannabe
- Join Date
- Oct 2006
- Location
- Colorado Springs, CO
- Posts
- 78
Similar to others, we have a Xen cluster with shared storage from a FreeNAS unit. It's a 24-bay SuperMicro affair and we built it like so:
20x 2TB SATA 7200RPM drives (Seagate Constellations, if I recall)
2x Intel DC S3700 100GB SSDs (supercaps) for the ZIL (redundant)
2x Samsung 840 512GB SSDs for L2ARC
An LSI 2308-based SATA controller, a 10GbE card, misc fixins
We got a lot of our know-how from zfsbuild.com. This SAN has been particularly fast and easy to maintain. Only hiccup we had was that every 35 days, the following Sunday morning, the automatic ZFS scrub would take place and randomly stall I/O to some Linux VMs, taking them offline. We now do those scrubs manually every 60 days during scheduled maintenance.
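Moving scrubs into a controlled window, roughly as described above (pool name and schedule are hypothetical):

```shell
# Disable the distro's periodic scrub job, then run scrubs manually
# during scheduled maintenance:
zpool scrub tank
zpool status tank     # shows scrub progress and estimated time left
# Or from cron, 02:00 on the 1st of every second month:
# 0 2 1 */2 * /sbin/zpool scrub tank
```

A scrub can also be paused or cancelled with `zpool scrub -s tank` if it starts stalling guest I/O.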
The downside is that it's an expensive piece of kit to build, and its performance does not distribute/scale super well, so this beast is probably the last of its kind for us.
Randal Kohutek | Ops Guy
Data102 - Colorado's Public Datacenter
Virtual Private Servers | Colocation | VoIP Hosted Handsets | Email Firewalls | And More!
www.Data102.com | (888)-Data102 | Call us Today!
-
05-24-2015, 01:22 AM #17
Premium Member
- Join Date
- Jul 2003
- Location
- Waterloo, Ontario
- Posts
- 1,135
This thread came at the perfect time. I've looked into setting up a ZFS NAS for a while now and finally freed up some time to get into it again.
I've been looking for some JBOD SuperMicro chassis, if anybody has a recommendation for something low-cost. I have some 800GB Intel S3700s that I want to use for ZIL/L2ARC and some 16-core AMD chips that I can throw in there to test out compression and/or deduplication.
If anybody can make some suggestions, I'd be happy to collaborate!
Right Servers Inc. - Fully Managed VPS | Fully Managed Bare Metal Servers. Hosting for entrepreneurs, by entrepreneurs. Features you can't ignore: High Availability| VMWare| NVMe| DirectAdmin & Softaculous Premium| BitNinja A Security Suite| Daily Backups| 24/7 Monitoring| Software Installation, Configuration & troubleshooting| Website Migrations | 30 Day Money Back Guarantee
-
05-24-2015, 06:57 AM #18
Web Hosting Master
- Join Date
- Apr 2006
- Posts
- 931
Hello, in your experience, is this ZFS setup a good idea on a SATA 3Gbps interface?
4 x 3TB in RAID10
2 x 256GB SSD in RAID1 (for L2ARC and ZIL)
-
05-24-2015, 01:23 PM #19
Web Hosting Master
- Join Date
- Apr 2004
- Location
- Singapore
- Posts
- 1,234
Ideally the SSDs should connect to 6Gbps SATA, as some SSDs can easily hit 500MB/s, but 3Gbps is fine unless you want very high sequential throughput.
Alan Woo, alan [@] ne.com.sg
= NewMedia Express Pte Ltd (AS38001)
= IP Transit, Colocation & Dedicated Servers in Singapore | Hong Kong | Tokyo | Seoul | Jakarta |
= Singapore Speedtest speedtest.sg
-
05-25-2015, 07:14 AM #20
Web Hosting Master
- Join Date
- Apr 2006
- Posts
- 931
What do you think of using a single MLC PCIe SSD for both L2ARC and ZIL?
-
05-27-2015, 08:56 AM #21
Junior Guru
- Join Date
- Jan 2003
- Location
- Budapest, Hungary
- Posts
- 236
1. Don't use a single SSD for both L2ARC and ZIL; it might become a bottleneck.
2. Use one device per function.
For the L2ARC, space is the priority, so give it a lot. The ZIL does not use much, so around 8GB will be enough.
██ ServerAstra.com website / e-mail: info @ serverastra.com
██ HU/EU Co-Location / Managed and Unmanaged Cloud & Dedicated servers in Hungary with unmetered connections
-
05-27-2015, 04:51 PM #22
Web Hosting Master
- Join Date
- Apr 2006
- Posts
- 931
I will use a new Intel 750 PCIe SSD in my setup; 430k IOPS read and 230k write should be enough.