The short answer is no.
First of all, it is important to understand that the frequency only affects the possible maximum transfer rate; it is an upper limit that isn't reached in every scenario. Sure, if you run a benchmark with those two, the higher frequency will very likely result in a higher transfer rate, but that difference doesn't really matter if you're using Lightroom.
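To put a rough number on that "upper limit": for DDR modules the theoretical peak bandwidth is simply the effective transfer rate times the width of the channel. Here is a minimal sketch, assuming ordinary DDR3 with a 64-bit (8-byte) channel; the two module speeds are just example values:

```python
# Theoretical peak bandwidth of one memory channel.
# Assumes a standard 64-bit (8-byte) wide DDR channel; these are
# ceiling figures that real workloads never fully reach.

def peak_bandwidth_mb_s(transfer_rate_mt_s: int, bus_width_bytes: int = 8) -> int:
    return transfer_rate_mt_s * bus_width_bytes

for rate in (1333, 1600):                      # example DDR3 speeds
    print(f"DDR3-{rate}: {peak_bandwidth_mb_s(rate)} MB/s peak per channel")
# DDR3-1333: 10664 MB/s peak per channel
# DDR3-1600: 12800 MB/s peak per channel
```

The difference in the ceiling is real, but that ceiling is not what limits Lightroom during normal editing.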
Furthermore, those DIMMs (Dual Inline Memory Modules, or simply your memory for this purpose) have even more numbers that matter for how they perform. Very often you see several numbers after the frequency, written like CL7-7-7-18. These timings define the number of clock cycles the memory needs for certain operations; for a module with the values above, the first number (the CAS latency) means it takes 7 full cycles from requesting data until the data is returned. All of that is pretty theoretical stuff, as you won't notice it as a user.
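If you want to see why those timings hardly matter in practice, you can convert the cycles into actual nanoseconds. A small sketch, with made-up example modules (a slower-clocked CL7 module versus a faster-clocked CL9 one):

```python
# Convert a CAS latency given in clock cycles into nanoseconds.
# For DDR memory the real clock is half the effective transfer rate.
# The module speeds and CL values below are made-up examples.

def cas_latency_ns(cl_cycles: int, transfer_rate_mt_s: int) -> float:
    clock_mhz = transfer_rate_mt_s / 2        # real clock in MHz
    return cl_cycles / clock_mhz * 1000       # cycles -> nanoseconds

print(cas_latency_ns(7, 1333))   # ~10.5 ns
print(cas_latency_ns(9, 1600))   # ~11.25 ns
```

Either way you end up around 10 nanoseconds, a difference you will never feel in an application like LR.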
I would not worry too much about all that technical stuff, as the effect will not be noticeable and will probably only be measurable in benchmarks.
However, what you should care about is something called Dual Channel memory. What this effectively means is that your RAM works much like a RAID 0: you can think of it as every second bit being written to the other DIMM. So instead of one DIMM having to write 10 MByte, for instance, each of the two writes 5 MByte, which results in a performance boost (see the sketch below).
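Just to make that striping idea concrete, here is a toy sketch (purely illustrative; a real memory controller interleaves at cache-line granularity, not per bit or per byte):

```python
# Toy model of the RAID-0-style striping idea behind dual channel:
# a block of data is split round-robin over two channels, so each
# DIMM only has to handle half of it. Not how real hardware works
# internally, just an illustration of the principle.

def stripe(data: bytes, channels: int = 2) -> list[bytes]:
    return [data[i::channels] for i in range(channels)]

payload = bytes(10 * 1024 * 1024)                # a 10 MByte write
parts = stripe(payload)
print([len(p) // (1024 * 1024) for p in parts])  # [5, 5] -> 5 MByte each
```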
Therefore, four DIMMs won't really increase your memory performance compared to just two DIMMs, as you then simply have two Dual Channel "clusters", or whatever you want to call them. Four DIMMs only perform better if your mainboard supports Quad Channel. Don't worry about that terminology too much; pretty much every mainboard supports Dual Channel memory, so you're good to go.
More important for using LR is definitely the CPU. If you tweak a slider in LR, the CPU does almost all the work for you, so it's essential to have enough CPU performance.
Since GPU support was also mentioned before: it might improve editing in some scenarios, but as null mentioned, the developer has to implement it. Currently only very few functions in LR can use the GPU, and enabling it actually slows down other parts of the program; that's why I have it turned off. All your basic editing is done on the CPU alone. Therefore, I would not pay too much attention to choosing a GPU for LR either.
Greetings
Rusher0