Time and again, Google and the other search engines have explained that they understand not all duplicate content is malicious. In fact, Google even expressly gives examples of non-malicious duplicate content (e.g. pages stripped down for mobile devices that still contain essentially the same content, and printer-only versions of web pages) in the Webmaster Tools Help page on the topic. Furthermore, Google also says that snippets and quotes are not considered duplicate content, so you do not have to worry about an SEO penalty for those.
Despite this, duplicate content is still a big issue in SEO, not for fear of being penalized but because Google might rank the URL of a page with duplicate content higher than the preferred URL. The ramifications of Google showing the non-preferred link in its search results range from usability issues (e.g. users land on the stripped-down version of the page) to monetization issues (e.g. all the ads are probably on the preferred URL).
Several years ago, the common recommendation was simply to block crawler access to duplicate content, usually via the robots.txt file. However, this is now actually discouraged. Google suggests that the best way to ensure the preferred URL shows up in its SERPs is simply to tell the search engine which URL is preferred. This method is called canonicalization, and it can be done in several ways. The preferred way nowadays, since all major search engines support it, is to use the canonical tag: add a <link> element with the attribute rel="canonical" to the <head> section of each non-canonical page.
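As a minimal sketch, a canonical tag on a printer-only duplicate page might look like the following (the URL and page title are hypothetical examples, not from any real site):

```html
<!-- Placed in the <head> of the duplicate (non-canonical) page, -->
<!-- e.g. a printer-only version of an article. -->
<head>
  <title>Example Article (Printer-Friendly)</title>
  <!-- Tells search engines which URL is the preferred version -->
  <link rel="canonical" href="https://www.example.com/article">
</head>
```

The tag goes on the duplicate pages, not the canonical one, and search engines treat it as a strong hint to consolidate ranking signals onto the preferred URL.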
Another way to address duplicate content issues is to use a 301 redirect. Note, though, that with this method the user, and not just the bot, is redirected to the preferred URL. This won't work if some users genuinely need to reach the non-preferred URL.
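On an Apache server, a 301 redirect can be sketched in an .htaccess file like this (the paths shown are illustrative assumptions, not from the original):

```apache
# Permanently (301) redirect a hypothetical duplicate URL
# to the preferred URL; anyone requesting /article/print,
# human or bot, is sent to /article instead.
Redirect 301 /article/print https://www.example.com/article
```

Because the redirect is permanent, search engines transfer the duplicate URL's ranking signals to the preferred URL over time, which is exactly why it only suits cases where nobody should ever land on the old address.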