In a RAID0, both read amplification and write amplification are nil.
In a RAID10, the per-disk read load is halved (either mirror can serve a read), and write amplification is nil.
In a RAID5, read amplification is nil. Write amplification is, in the best case, nil (a full-row write) and, in the worst case, 2x write amplification plus 2 additional reads per write (does the SSD even care about that?). How this mixes out in actual use depends on the write profile (sequential writes or random writes). In practice, nothing really staggering here, unless caching fails to mitigate the effect of random writes (in which case you get 2x write amplification).
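The two RAID5 cases above can be sketched as back-of-the-envelope arithmetic. This is a hypothetical helper, not taken from any real controller; it just counts logical I/Os per user write under the usual full-stripe vs. read-modify-write paths:

```python
# Sketch of RAID5 write cost: full-stripe write vs. read-modify-write.
# Assumes an array with `data_disks` data blocks + 1 parity block per
# stripe; the numbers are logical I/Os, not real controller behavior.

def raid5_write_cost(blocks_written: int, data_disks: int) -> dict:
    """I/O cost of writing `blocks_written` blocks within one stripe."""
    if blocks_written == data_disks:
        # Full-row write: parity is computed from the new data alone.
        return {"reads": 0, "writes": data_disks + 1}  # data + parity
    # Partial-stripe write (read-modify-write): read the old data and
    # old parity, then write the new data and new parity.
    return {"reads": blocks_written + 1, "writes": blocks_written + 1}

# Best case from the post: full-row write, no extra reads.
print(raid5_write_cost(blocks_written=4, data_disks=4))
# Worst case: one random block costs 2 reads + 2 writes, i.e. the
# 2x write amplification plus 2 additional reads mentioned above.
print(raid5_write_cost(blocks_written=1, data_disks=4))
```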
If someone has to replace caches every six months or so, they are either really pushing large amounts of data, or they need to change their supplier.
I'm sorry, but that's demonstrably false. In RAID 0, any file larger than the stripe size will be read from every disk that contains it. That read operation imposes slight wear on each SSD asked to serve it, so RAID 0 SSDs will degrade slightly faster than non-RAID SSDs of the same type.
RAID10 has the same issue, but because it doesn't use a parity stripe, the two disks in each mirrored pair will wear evenly.
In RAID5, every disk that contains the file, including the parity bits, is read every time a given file is requested. You can call them blocks, because they contain bits of files, but all of this juggling is done by the controller. RAID6 is even worse, because it adds yet another parity stripe to keep tabs on.
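For what it's worth, the parity juggling being argued about is plain XOR. A minimal sketch (toy block sizes, nothing like a real controller's stripe handling) of how the parity block is built and how a lost block is rebuilt from the survivors:

```python
# Toy RAID5 parity: the parity block is the XOR of all data blocks in
# a stripe. A real controller does this per stripe; this is just the math.

def xor_blocks(*blocks: bytes) -> bytes:
    """XOR equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"   # data blocks on three disks
parity = xor_blocks(d0, d1, d2)          # parity block on the fourth disk

# Lose the disk holding d1: rebuild its block from the rest + parity.
rebuilt = xor_blocks(d0, d2, parity)
print(rebuilt == d1)  # True
```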
With SSD technology, reads aren't free, but almost; writes are very expensive.
But honestly, SSD technology has absorbed so much of this into the drive that we don't really have to think about it. All we really need to concern ourselves with is the write endurance of the disks against the load that's going to be put on the array.

SSD reliability has reached a point where I'm honestly wondering if RAID has any value for any purpose other than performance. Any RAID controller today is going to spread the work over all the disks to ensure even wear, and each disk is going to do its own thing to maintain itself internally. So the reality is, you're looking at a bunch of devices with no moving parts, and therefore no unpredictable failure characteristics, all designed to work as a team, and therefore they will wear evenly. Just like the brakes on your car: you don't replace just one pad, you do all four on the axle.

I wonder whether, when we do actually have disk faults with SSDs, there's a risk of a cascade fault through the entire array. If all the disks fail, the array is obviously dead. If the disks are failing predictably, with software alerts cluing us in... do we even need RAID anymore? Sure, an SSD can go bad mid-run, but it's extremely rare, very similar to having a memory stick go bad outside the first month of operation. RAID 10 could be made to work and offset all the risks, but to do so the second half of the mirror would need to be made of different disks than the first. I know of zero people who do this when deploying servers; we just stuff in the drives and set the thing up.
TL;DR: SSDs may have invalidated RAID for any purpose other than obtaining a single larger logical volume to store stuff. But I'm not crazy enough to risk my customers on that, so I'll stick to the old tried-and-true methods until a decade or two of actual use proves we can safely remove the RAID.