Best (disks) for fileserver?

Thedog

Hello,

I met a new client today who basically does some heavy CAD work, and their main concern is that their server is very slow (they open files directly from the server). They want me to suggest a new solution. My main idea was a Linux machine using 2x SSDs in RAID 0, but I'm a bit unsure about using SSDs.

I know that read/write on an SSD is much faster, but how does that hold up when multiple clients connect? Another solution I was considering is a Windows server with Offline Files on the clients. What is your take?
 
How does the network setup look? Full gig network with good brand name gig switch? What is the average size of the files they are opening?

When you say they open files on the server - I'm assuming you mean they open on their workstations from a shared folder on the server? If this is the case - unless they are really hammering the file server or their CAD drawings are 800 MB in size, I'd look to the network setup first.
 
I agree with @BlackLabTechs on verifying the network is up to speed first. Then, if they are using a real server, they should be running 15K SAS drives in at least two RAID arrays: one for the OS and one for data storage. I would advise against RAID 0, as that just doubles the chance of failure. They should have RAID 1 for the OS and RAID 1 or 10 for data storage; RAID 10 would offer better performance.
 
So they currently don't have a "domain"?
How many users?

For servers, we typically use "SAS" drives. 2.5".
10,000 rpm or 15,000 rpm

SSDs are starting to get trusted...if...IF...you use the better brands. I only trust Crucial, Samsung, and Intel SSDs. Wouldn't touch any other brand if you paid me to use them...I've seen way too many fail. I'm not ready to trust them in servers, though. We use them in firewalls, laptops, and desktops.

Also, we typically use RAID 1 (for the OS volume) and RAID 10 (for the data volume) on servers, staying away from RAID 0. With RAID 0, if you blow a drive, ALL your data is gone.

If it's peer to peer..no domain, consider a NAS...like a larger Synology...with 4 or 6 WD RED disks.

Also, good fast network hardware is a must, use quality gigabit switches.

Also on workstations, stuff 'em with RAM...16 gigs or more. Fast disks on graphics workstations too...I often spec them with 10k SAS.
 
The point of the previous posts is that any good spinning-disk setup in a server should outperform network speeds for a small shop. If the network is in fact the bottleneck, you can put all the SSDs you want in that server and workstations won't open the files any faster.

For a crude test, copy a large file from the server to a local workstation and see what the average transfer speed is.
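That copy-and-time test can be scripted so you get a number instead of eyeballing the progress bar. A minimal sketch using only the Python standard library; the paths are placeholders, and for the real test you'd point the source at a file on the server share (an SMB/UNC path):

```python
import os
import shutil
import tempfile
import time

def measure_copy_speed(src_path: str, dst_dir: str) -> float:
    """Copy src_path into dst_dir and return the average speed in MB/s."""
    size_mb = os.path.getsize(src_path) / (1024 * 1024)
    dst_path = os.path.join(dst_dir, os.path.basename(src_path))
    start = time.monotonic()
    shutil.copyfile(src_path, dst_path)
    elapsed = max(time.monotonic() - start, 1e-9)  # guard against zero division
    os.remove(dst_path)  # clean up the test copy
    return size_mb / elapsed

# Local demo with a 16 MB scratch file; substitute a path on the
# server share to measure the actual network transfer rate.
with tempfile.TemporaryDirectory() as src_dir, tempfile.TemporaryDirectory() as dst_dir:
    src = os.path.join(src_dir, "testfile.bin")
    with open(src, "wb") as f:
        f.write(os.urandom(16 * 1024 * 1024))
    print(f"{measure_copy_speed(src, dst_dir):.1f} MB/s")
```

Use a file big enough (a few hundred MB) that caching and ramp-up don't dominate the measurement.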
 
Haven't been able to fully examine it, but there is a gigabit HP 1810 switch. It only has 24 ports, though, so they put in some crappy 8-port switch to serve some users as well... I will change this to a 48-port. They also have a Cisco ISA570 firewall, which I think acts as the DHCP server etc. And yes, the files could actually exceed 1 TB, but I think they mostly copy the files locally and then copy them back to the server.

I suggested RAID 0 for extra speed, but maybe just using SSDs would be enough of an improvement.

They do have a domain, but it seems only some clients are part of it; some are just set up in a workgroup. I don't think they use any domain features. Their e-mail is hosted externally, so at this point the server seems to only serve files through a share. The number of users would be at most 15 at the same time. Their workstations should be fine: i7s with an SSD OS drive, a 2 TB data drive, 32 GB of RAM, and high-end gaming graphics cards.

My idea was a Linux server with SSDs, using multiple network cards with NIC bonding. Has anyone used that feature, and is it worth it?

I'm also thinking about outside access, either through VPN or by having the server files uploaded to OneDrive/Dropbox etc. The second option is very attractive because they sometimes need to send files outside the organization, and being able to just make a quick link would be nice.
 
I don't know if their cabling is Cat5 or Cat6, but if it's Cat6, wouldn't switching to a 10 Gbit switch do wonders, considering that they have a 1 Gbit switch at this point? And if the server had a 10 Gbit NIC, would something else be the bottleneck anyway?
 
Running 10 gig to workstations could get rather pricey! I know you said files could be over 1 TB; how large on average is each file? Just trying to determine whether it is the file throughput of the server that's slow or something else. I've been in a shop with 100 AutoCAD users accessing a small number of servers before with no issues. Some had basic CAD drawings; others had topo maps and such loading in as well, using survey modules. That was back when 10/100 networking was the norm.

I would still run some speed tests on the network by simply copying files from server to workstation to see if the max speed is there. I've seen everything from miswired jacks to someone manually setting a NIC to half duplex slow down a network. You need to verify it's the file server that's actually slow.

Yes - a multi-NIC server set up correctly will help with multiple users, even using a Windows Server OS.
 
TapaTech is the design/CAD network top gun around here...I'm sure he'll chime in.
Network performance is what you want to look at. Honestly, putting in SSDs is overkill and, IMO, still risky. Not to mention pricey when you want large storage.

We have a Synology RS2212+ NAS unit in our server rack that we use for file storage, with a half dozen Western Digital hard drives in it using Synology Hybrid RAID. The WD Red drives are slower, 5,900 rpm, but in a good RAID setup they keep up just fine with our gigabit network...they are NOT the bottleneck. I can have them doing sustained transfers of huge files in the mid-to-upper 80s MB/s. Theoretically, gigabit transfers at 125 MB/s, but in the real world towards 90 MB/s is normal, given the various bus overheads of the computers on each end.
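The arithmetic behind those figures, as a back-of-the-envelope helper. The efficiency factor here is an assumption on my part (roughly what SMB copies manage in practice), not a measured constant:

```python
def wire_speed_mb_s(link_gbit: float, efficiency: float = 0.72) -> float:
    """Rough usable file-transfer rate for an Ethernet link.

    1 Gbit/s is 125 MB/s of raw payload; the efficiency factor
    (assumed, ~70-75%) accounts for TCP/SMB protocol overhead
    and host-side bus limits on each end.
    """
    raw_mb_s = link_gbit * 1000 / 8  # bits per second -> megabytes per second
    return raw_mb_s * efficiency

print(round(wire_speed_mb_s(1.0)))   # gigabit: about 90 MB/s usable
print(round(wire_speed_mb_s(10.0)))  # 10GbE headroom, if the disks can keep up
```

If your measured copy speed lands well below that estimate, suspect the network (duplex, cabling, a 10/100 switch somewhere) rather than the disks.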

Workstations with the data drive on a regular SATA disk...dunno if you'd benefit from trying to go 10 gig. With the OS drive being an SSD, they boot up quick first thing in the morning...but if they're using the SATA drive for storage and workspace, that's what they're working on all day long. So IMO, spending all that cash on 10 gig (that's a lot of money) wouldn't be felt much.
 
You might not even need to replace the switch if they have many printers in the office - just ID the ports the printers are on and offload them to the cheap switch. Also check to be sure that there aren't 10/100 switches under people's desks, you want gigabit all the way to the PCs. Also check that the server's network connection is to the switch not to the firewall, because that could easily slow everything down.

I'd never consider RAID 0 for anything in a business environment; if I had a situation where that was the only way to get the needed performance with standard drives or SSDs, then I'd jump directly to PCIe SSDs instead.

I'm going to hope you meant that the files could exceed 1GB - if individual files can exceed 1TB then there's something else going on and your SSD (or other storage) options are going to be very expensive.....
 
SolidWorks, but also Scene (FARO scans). The Scene files could be up to 100 GB, but they work with those locally and then upload them to the server.

Do they use PDM Server at all? FARO scanners are pretty cool, but bleh, the point clouds are such a PITA to work with. SolidWorks ScanTo3D is a cool plugin but can be limited. If memory serves, you can ZIP a point cloud file and it compresses dramatically... I haven't worked with them in a while though, so I may be 100% wrong on that one.
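Since I'm hedging on that compression claim, it's easy to check against one of their actual files with nothing but the standard library (the path is a placeholder):

```python
import os
import zipfile

def zip_ratio(path: str) -> float:
    """ZIP-compress path next to itself, return compressed/original size, clean up."""
    archive = path + ".zip"
    with zipfile.ZipFile(archive, "w", compression=zipfile.ZIP_DEFLATED) as zf:
        zf.write(path, arcname=os.path.basename(path))
    ratio = os.path.getsize(archive) / os.path.getsize(path)
    os.remove(archive)  # remove the temporary archive after measuring
    return ratio

# e.g. zip_ratio(r"D:\scans\site01.fls") -> 0.3 would mean a 70% size reduction
```

A ratio near 1.0 means the format is already compressed and zipping buys nothing; well below 1.0 means archiving before upload would save real transfer time.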

Accessing SolidWorks files directly over the network is inadvisable. Engineers should be using Pack and Go to pull the files and work on them locally. If they have common parts, those should be kept in a part library. And if they are using common parts and it's all network-linked, then they really need to be using PDM so they can check files in and out. Otherwise stuff can get really F'd up if someone accidentally saves a modification to a part.

EDIT: forgot to mention - if they are using SW network licenses, you're going to need a Windows server if you want it to act as the license server. In my experience, CAD stuff requires Windows.
 
They have a PDM. I haven't really gotten into exactly what it is and how it works, but one of the engineers explained the benefit of being able to work on different parts in the same project without locking the whole project. Is this handled by the server or within the actual project files? I.e., would you be able to put the files on a NAS and still have that feature, or does server software need to keep track of it all?

The licensing stuff seems really old school: they have a license bound to a MAC address, and they will soon get a dongle that can be moved between computers. What I don't get is why you need a "server license" - why the heck isn't the software checked against an Internet server at SolidWorks instead?
 
If they are using SolidWorks PDM (the one that comes with the Pro/Premium versions of SW) then they need the PDM Server installed on a Windows machine. You can map the data store to a NAS, but I wouldn't recommend it. The software solves the multi-user and speed issues of the network:
  • All SW files and drawings are kept in a data Vault.
  • Anytime an engineer needs to work on a file/drawing, they "check out" that file so no one else can write to it.
  • When the file is checked out, the entire file is cached to the local machine that is working on it.
  • When the edits are complete, the entire file is checked back in and uploaded back to the Vault.
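That check-out/check-in flow can be sketched in a few lines. This is a toy model only; the real PDM implementation and API are nothing this simple, and all the names here are made up for illustration:

```python
class Vault:
    """Toy model of PDM-style check-out/check-in locking (illustrative only)."""

    def __init__(self):
        self.files = {}        # filename -> contents
        self.checked_out = {}  # filename -> user currently holding the lock

    def check_out(self, filename: str, user: str) -> str:
        if filename in self.checked_out:
            raise PermissionError(
                f"{filename} is locked by {self.checked_out[filename]}")
        self.checked_out[filename] = user
        return self.files[filename]  # cached to the user's local machine

    def check_in(self, filename: str, user: str, contents: str) -> None:
        if self.checked_out.get(filename) != user:
            raise PermissionError(f"{user} does not hold the lock on {filename}")
        self.files[filename] = contents  # uploaded back to the Vault
        del self.checked_out[filename]

vault = Vault()
vault.files["bracket.sldprt"] = "rev A"
local_copy = vault.check_out("bracket.sldprt", "alice")
try:
    vault.check_out("bracket.sldprt", "bob")  # bob is blocked while alice edits
except PermissionError as e:
    print(e)
vault.check_in("bracket.sldprt", "alice", "rev B")
```

The point of the lock is exactly the accidental-overwrite scenario described earlier in the thread: per-file check-out means two engineers can work in the same project without clobbering each other's parts.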
Licensing is simple. The company purchases network licenses: if they have 10 engineers, instead of purchasing 10 regular licenses they might purchase 5 network licenses, because typically only 5 engineers are working in SolidWorks at a time, though it could be any of the 10 individuals. MasterCAM does this too. It makes a lot of sense and saves money. A network license, I'm pretty sure, still comes with a dongle, and you need a Windows machine for it.

This CAD stuff is not cheap, and trying to go with something minimal or Linux is going to be a mess! So I would start by looking into good hardware. It will be expensive. You probably want a RAID 10 data store with at least 2 NICs and load balancing, and full gigabit to everyone.
 
Thank you for a well-written reply. So what I can conclude is that we need to stick with Windows Server (or at least Windows) in order to use PDM. I'm thinking about stripping everything from the server besides the CAD software, and putting accounting etc. either in a virtualized cloud solution or on a different machine on a different VLAN, so this machine is 100% dedicated to CAD.

What I meant about the network license is that the check of how many licenses are currently in use could just be performed online instead of from a local server. It would of course require Internet access, but I think that's better than requiring "network access". For example, if someone wanted to work with local files on a computer at home, they would need a VPN just to start the software (to reach the licensing server).

NICs are fairly cheap - are there any advantages to, for example, running 4 NICs compared to 2? Or are they "capped", so to speak, at a certain point where load balancing gets no further improvement?
 
Segregating the CAD server is a good idea. I know previous versions of SW PDM Server would run on Windows 8.1, but I have not checked recently. You may run into limitations with the OS and its connection limits, though.

LAN licensing is just the way it is. Oh well. You can "check out" a license for the weekend so you don't need to VPN home.

Multiple NICs will hit the disk speed limit. Two gigabit NICs are probably enough; you need to balance it against your disk speed and the number of concurrent transfers. Fortunately this is just big-file stuff, so figuring it out is relatively straightforward.
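The "capped by disk speed" point works out as a simple min(). The 110 MB/s per gigabit NIC is an assumed real-world figure, not a spec value:

```python
def effective_throughput_mb_s(disk_mb_s: float, nic_count: int,
                              nic_mb_s: float = 110.0) -> float:
    """The slower of the disk array and the aggregate NIC bandwidth wins.

    nic_mb_s is an assumed real-world rate per bonded gigabit NIC;
    bonding only helps until the array itself becomes the limit.
    """
    return min(disk_mb_s, nic_count * nic_mb_s)

# With a RAID 10 array sustaining ~300 MB/s, the second NIC clearly
# helps, the third barely, and the fourth not at all:
for nics in (1, 2, 3, 4):
    print(nics, effective_throughput_mb_s(300.0, nics))
```

So whether 4 NICs beats 2 depends entirely on how fast the data volume can actually read and how many workstations pull files at once.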
 
Thanks again. What's your take on using a domain/AD in this scenario? They do not need any centralized management of users, access controls, etc.; they basically only need the PDM plus file storage. I'm thinking that running things like DNS on the server might slow it down compared to just using a regular hostname with no AD/DNS/RM running on the server.
 
No problem, man! I have received so much help from this forum that I am glad if I can help someone else. Domain/AD/DNS is a very important duty but also a very light one in a small environment. I have lots of servers running AD/DNS/DHCP as well as file serving. With almost 20 users, a domain would be a good idea - it depends on the environment, though.
 
DHCP 'n DNS really are very small...almost unnoticeable loads. Unless you're running a Pentium 1 at 75 MHz, you ain't gonna see any slowdown.
A domain makes life sooo much easier for the admin: deploy printers, control access to folders (with a biz of ~20 users, it may be good to isolate management/admin/accounting from the general shares).
 