ZFS is a filesystem developed by Sun Microsystems and released under Sun's open source CDDL license. It has been ported to the BSDs (most notably FreeBSD, and FreeNAS, which is built on it) and to Linux; the early Linux port ran in user space via FUSE rather than in kernel space, since the CDDL is considered incompatible with the GPL, so performance under Linux suffers.
ZFS is pretty cool, but as stated, you really need to run Solaris, OpenSolaris, or one of the BSDs to make full use of it, and as such you need to be familiar with Unix administration.
Basically, ZFS allows filesystems up to 16 exabytes (16 million terabytes) in size, containing up to 281 trillion files. It is also a software disk management system: you create a pool of disks, configure it for performance, redundancy, etc., and then create multiple filesystems on top of that same pool (in other words, the group of disks can be presented as one disk or as many).

ZFS uses system RAM to cache read and write operations, which gives a dramatic performance boost when you are dealing with files smaller than the amount of RAM in the machine. Your write speed can be in the gigabytes per second, assuming you can get data to the system that fast, up until the cache fills, and depending on your tuning (e.g. no separate ZIL log, or a ZIL log on fast SSD drives). You can also use SSDs to extend the read cache beyond the size of your RAM.

You can create the equivalents of RAID 0, RAID 1, RAID 5, RAID 6, plain disks, concatenations, hot spares, and pretty much any combination thereof. At work I have set up a ZFS filesystem that uses 4 external disk arrays of 28 disks each, with multiple RAID-6-style devices striped together in such a way that you can lose one of the 4 arrays and the data would still be available (losing an entire array is entirely possible: maybe the system board dies, or both power supplies fail, etc.). The flexibility ZFS gives you in creating the disk pool also lets you add disks to a pool and increase your space over time.
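To give a feel for how little work the pooling described above takes, here is a minimal sketch of creating a pool with an SSD log and cache device. The pool name "tank" and device names like da0 are assumptions; substitute whatever your system uses.

```shell
# Create a pool named "tank" striped across two mirrored pairs
# (roughly RAID 1+0). Device names are hypothetical.
zpool create tank mirror da0 da1 mirror da2 da3

# Add a fast SSD as a separate ZIL (log) device and another SSD
# as an extra read cache beyond RAM.
zpool add tank log da4
zpool add tank cache da5

# Carve multiple filesystems out of the same pool.
zfs create tank/home
zfs create tank/media

# Check pool health and layout.
zpool status tank
```

All of the filesystems share the pool's free space, so you don't have to decide partition sizes up front.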
ZFS is also one of the only filesystems that creates and stores a checksum of the data, so it can tell you if your data has been corrupted (and exactly which files). This is something most people never think about, but as systems get larger and larger, a bit flip on disk gets more and more likely simply due to the sheer number of bits being stored. Lots of things can cause this (electromagnetic fields, radiation, even physical shock), and on other filesystems you would never know it happened, other than things starting to crash, fail, or just not look right.

It also supports snapshots. Say you want to make a change to your system: create a snapshot before the change, and if you don't like it, you can simply roll back to the snapshot, or even just copy individual files out of the snapshot, essentially going back in time to that previous version of the file. You can set up snapshots to happen on a schedule (e.g. daily, weekly, monthly; keep X daily, Y weekly, Z monthly and remove the oldest). Sharing out to Windows machines is extremely easy, with just a couple of commands.
Now, for the setup I intend to use: I would install the disks vertically in the case I linked, which spreads the drives across all 6 SAS backplanes and the 6 ports on the 3 SAS controller cards. This maximizes the performance of the drives, since the load is spread evenly across everything. I intend to create up to 4 groups of 6-disk raidz (raidz is effectively RAID 5), which are then striped together (RAID 0), so essentially a RAID 5+0 setup. In that layout I would be able to lose 1 disk out of each vertical column of disks without losing data.
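The striped-raidz layout above could be created in a single command. A sketch, with hypothetical device names standing in for the 24 drives:

```shell
# Four 6-disk raidz groups; ZFS stripes data across all four,
# giving the RAID 5+0 style layout described above. Each raidz
# group can survive the loss of one disk.
zpool create tank \
    raidz da0  da1  da2  da3  da4  da5  \
    raidz da6  da7  da8  da9  da10 da11 \
    raidz da12 da13 da14 da15 da16 da17 \
    raidz da18 da19 da20 da21 da22 da23
```

For the fault tolerance to match the physical layout, each raidz group's disks should be chosen so that no column of disks contributes more than one drive to any group.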