Not really. Mainly because when a web browser opens a page, the requests it sends are identical to the requests a link scraper would send to download the whole site.
Let's say you have a pay site that contains thousands of photos that you are charging a monthly subscription fee for, and you add new content on a weekly basis. You have a hundred galleries, and each gallery contains 20-30 photos. You paid good money for these photos and the hard-working models in them, so you want to protect your property.
When the web browser loads each gallery, it's going to make 20-30 concurrent requests, one for each photo on the page. Once those responses leave your server, you have zero control over what happens to the content.
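To make that concrete, here is roughly what a scraper's request could look like; the URL, headers, and session cookie below are all made up, but from the server's point of view nothing distinguishes this from the request a subscriber's browser sends when the gallery page loads:

```typescript
import { writeFile } from "node:fs/promises";

// Hypothetical scraper: fetch one photo with headers copied from a real
// browser session. The server sees exactly what a browser would send.
const response = await fetch("https://example.com/gallery/1/photo-07.jpg", {
  headers: {
    "User-Agent":
      "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 " +
      "(KHTML, like Gecko) Chrome/120.0 Safari/537.36",
    "Accept": "image/avif,image/webp,image/*,*/*;q=0.8",
    "Referer": "https://example.com/gallery/1",
    "Cookie": "session=whatever-the-paying-subscriber-was-issued",
  },
});

// Once the bytes arrive, they can be saved, re-hosted, or passed around freely.
await writeFile("photo-07.jpg", Buffer.from(await response.arrayBuffer()));
```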
You could attempt things like:
Rate limiting
Ensure that no more than 3 requests per second are served from your protected content. But this just makes for an irritating experience for your legitimate users, who are only trying to load their po^H^Hcontent, and all it does to the scrapers is slow them down by a few seconds.
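For illustration, a minimal sketch of that limit as Express middleware, assuming an in-memory counter keyed by client IP (the /protected path, directory name, and the 3-per-second figure are just placeholders taken from above):

```typescript
import express, { Request, Response, NextFunction } from "express";

// Naive in-memory rate limiter: at most 3 protected requests per second per IP.
// (A real deployment would share this state in something like Redis.)
const WINDOW_MS = 1000;
const MAX_REQUESTS = 3;
const hits = new Map<string, { count: number; windowStart: number }>();

function rateLimit(req: Request, res: Response, next: NextFunction): void {
  const ip = req.ip ?? "unknown";
  const now = Date.now();
  const entry = hits.get(ip);

  // Start a fresh window if this IP hasn't been seen recently.
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    hits.set(ip, { count: 1, windowStart: now });
    next();
    return;
  }

  if (entry.count < MAX_REQUESTS) {
    entry.count += 1;
    next();
    return;
  }

  res.status(429).send("Too many requests - slow down.");
}

const app = express();
// "protected-content" stands in for wherever the gallery images live.
app.use("/protected", rateLimit, express.static("protected-content"));
app.listen(3000);
```

Note that a scraper only has to sleep between requests to stay under the limit; the throttle costs it patience, not access.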
Referrer checking
This is often used to block hotlinking - making sure that every resource requested has a valid HTTP referrer matching your white-listed domains. But any decent leeching software is going to send a valid referrer anyway, and the header is incredibly trivial to forge, so you're wasting your energy here.
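A sketch of the check, and of how little it buys you, again as hypothetical Express middleware; the whitelist and paths are placeholders:

```typescript
import express, { Request, Response, NextFunction } from "express";

// Hypothetical whitelist of hosts allowed to request your images.
const ALLOWED_HOSTS = new Set(["example.com", "www.example.com"]);

function checkReferrer(req: Request, res: Response, next: NextFunction): void {
  // The header is spelled "Referer" on the wire; Express looks it up
  // case-insensitively either way.
  const referrer = req.get("referer");
  if (referrer) {
    try {
      if (ALLOWED_HOSTS.has(new URL(referrer).hostname)) {
        next();
        return;
      }
    } catch {
      // Malformed referrer URL: fall through to the rejection below.
    }
  }
  res.status(403).send("Hotlinking not allowed.");
}

const app = express();
app.use("/protected", checkReferrer, express.static("protected-content"));
app.listen(3000);

// Meanwhile, on the scraper's side, sending a "valid" referrer is one header:
//
//   fetch("https://example.com/protected/photo-07.jpg", {
//     headers: { Referer: "https://example.com/gallery/1" },
//   });
```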
Disable right-clicking
This will only deter the simplest method of downloading content - right-clicking and choosing "Save Target As..." - and it is incredibly simple to circumvent.
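For completeness, the usual snippet and its defeat (browser-side TypeScript; nothing here is specific to any framework):

```typescript
// The "protection": swallow the context menu on images so "Save Image As..."
// never appears.
document.addEventListener("contextmenu", (event) => {
  const target = event.target as HTMLElement;
  if (target.tagName === "IMG") {
    event.preventDefault();
  }
});

// The circumvention: disable JavaScript, read the URL out of the page source
// or the network tab, or just dump every image URL from the console:
//
//   document.querySelectorAll("img").forEach((img) => console.log(img.src));
```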
And, at the end of the day, your users only have to press the Print Screen key to take a copy of anything that's displayed on their screen.
tl;dr
You can't prevent this.